Test Report: Docker_Linux_docker_arm64 12230

4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0:2021-08-11:19925

Test fail (14/246)

TestAddons/parallel/Registry (174.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 34.2655ms
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:340: "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01401437s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:340: "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007653227s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210811003021-1387367 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210811003021-1387367 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:299: (dbg) Done: kubectl --context addons-20210811003021-1387367 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.51520973s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 ip
2021/08/11 00:32:54 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:32:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:32:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:32:55 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:32:55 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:32:57 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:32:57 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:09 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:10 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:10 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:33:12 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:12 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:16 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:25 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:26 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:33:28 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:28 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:40 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:41 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:33:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:33:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:33:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:58 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:33:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:58 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:33:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:33:59 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:34:05 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:05 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:34:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:15 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:34:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:34:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:16 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:34:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:34:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:35 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:34:35 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:35 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:34:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:38 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:38 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:34:42 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:42 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:34:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:53 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:34:53 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:53 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:34:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:34:56 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:34:56 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:35:00 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:00 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:35:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:17 [DEBUG] GET http://192.168.49.2:5000
2021/08/11 00:35:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/11 00:35:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/11 00:35:20 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:20 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/11 00:35:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/11 00:35:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/11 00:35:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:339: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210811003021-1387367
helpers_test.go:236: (dbg) docker inspect addons-20210811003021-1387367:

-- stdout --
	[
	    {
	        "Id": "5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120",
	        "Created": "2021-08-11T00:30:25.788956339Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1388276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-11T00:30:26.269675899Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/hosts",
	        "LogPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120-json.log",
	        "Name": "/addons-20210811003021-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210811003021-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210811003021-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210811003021-1387367",
	                "Source": "/var/lib/docker/volumes/addons-20210811003021-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210811003021-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210811003021-1387367",
	                "name.minikube.sigs.k8s.io": "addons-20210811003021-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17fbfc5588d01a72b28ff1d6c58d2e4bb8f2d21449a18677b10dd71b3b83ded4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50249"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50246"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50248"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50247"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/17fbfc5588d0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210811003021-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5aa46682b774",
	                        "addons-20210811003021-1387367"
	                    ],
	                    "NetworkID": "6dba5b957173120a4aafdf3873eab586b4a4a9b5791668afbe348cef17103048",
	                    "EndpointID": "d99047e7dbe6428356a66d026486a85ca7cdfff3ea6f120c69d9470809fd105b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210811003021-1387367 -n addons-20210811003021-1387367
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-20210811003021-1387367 logs -n 25: (1.572386229s)
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210811002935-1387367   | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:07 UTC |
	| delete  | -p                                     | download-only-20210811002935-1387367   | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:07 UTC |
	|         | download-only-20210811002935-1387367   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210811002935-1387367   | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:08 UTC |
	|         | download-only-20210811002935-1387367   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210811003008-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:21 UTC | Wed, 11 Aug 2021 00:30:21 UTC |
	|         | download-docker-20210811003008-1387367 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:21 UTC | Wed, 11 Aug 2021 00:32:41 UTC |
	|         | addons-20210811003021-1387367          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=docker             |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:32:54 UTC | Wed, 11 Aug 2021 00:32:54 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:35:32 UTC | Wed, 11 Aug 2021 00:35:32 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 00:30:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 00:30:21.602659 1387850 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:30:21.602845 1387850 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:30:21.602855 1387850 out.go:311] Setting ErrFile to fd 2...
	I0811 00:30:21.602859 1387850 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:30:21.603002 1387850 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 00:30:21.603313 1387850 out.go:305] Setting JSON to false
	I0811 00:30:21.604120 1387850 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36768,"bootTime":1628605053,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:30:21.604207 1387850 start.go:121] virtualization:  
	I0811 00:30:21.607468 1387850 out.go:177] * [addons-20210811003021-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 00:30:21.611463 1387850 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 00:30:21.609464 1387850 notify.go:169] Checking for updates...
	I0811 00:30:21.615278 1387850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:30:21.618400 1387850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 00:30:21.621705 1387850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 00:30:21.621941 1387850 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 00:30:21.658581 1387850 docker.go:132] docker version: linux-20.10.8
	I0811 00:30:21.658691 1387850 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:30:21.762135 1387850 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:30:21.69832939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:30:21.762295 1387850 docker.go:244] overlay module found
	I0811 00:30:21.764908 1387850 out.go:177] * Using the docker driver based on user configuration
	I0811 00:30:21.764929 1387850 start.go:278] selected driver: docker
	I0811 00:30:21.764934 1387850 start.go:751] validating driver "docker" against <nil>
	I0811 00:30:21.764951 1387850 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 00:30:21.765000 1387850 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 00:30:21.765023 1387850 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 00:30:21.767459 1387850 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 00:30:21.767848 1387850 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:30:21.854139 1387850 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:30:21.794641916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:30:21.854262 1387850 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 00:30:21.854419 1387850 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 00:30:21.854442 1387850 cni.go:93] Creating CNI manager for ""
	I0811 00:30:21.854449 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:30:21.854458 1387850 start_flags.go:277] config:
	{Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:30:21.856879 1387850 out.go:177] * Starting control plane node addons-20210811003021-1387367 in cluster addons-20210811003021-1387367
	I0811 00:30:21.856928 1387850 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:30:21.858843 1387850 out.go:177] * Pulling base image ...
	I0811 00:30:21.858881 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:30:21.858920 1387850 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 00:30:21.858936 1387850 cache.go:56] Caching tarball of preloaded images
	I0811 00:30:21.859099 1387850 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 00:30:21.859124 1387850 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 00:30:21.859416 1387850 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json ...
	I0811 00:30:21.859452 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json: {Name:mkad62a8ef7b1cb9eac286f0a4233efc658a409a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:21.859624 1387850 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:30:21.914689 1387850 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:30:21.914718 1387850 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:30:21.914731 1387850 cache.go:205] Successfully downloaded all kic artifacts
	I0811 00:30:21.914776 1387850 start.go:313] acquiring machines lock for addons-20210811003021-1387367: {Name:mk226548caa021fe6ed2b9069936448c3d09f345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 00:30:21.914932 1387850 start.go:317] acquired machines lock for "addons-20210811003021-1387367" in 132.463µs
	I0811 00:30:21.914971 1387850 start.go:89] Provisioning new machine with config: &{Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:30:21.915061 1387850 start.go:126] createHost starting for "" (driver="docker")
	I0811 00:30:21.917526 1387850 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0811 00:30:21.917773 1387850 start.go:160] libmachine.API.Create for "addons-20210811003021-1387367" (driver="docker")
	I0811 00:30:21.917815 1387850 client.go:168] LocalClient.Create starting
	I0811 00:30:21.917923 1387850 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 00:30:22.339798 1387850 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 00:30:22.974163 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 00:30:23.003309 1387850 cli_runner.go:162] docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 00:30:23.003391 1387850 network_create.go:255] running [docker network inspect addons-20210811003021-1387367] to gather additional debugging logs...
	I0811 00:30:23.003413 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367
	W0811 00:30:23.032304 1387850 cli_runner.go:162] docker network inspect addons-20210811003021-1387367 returned with exit code 1
	I0811 00:30:23.032336 1387850 network_create.go:258] error running [docker network inspect addons-20210811003021-1387367]: docker network inspect addons-20210811003021-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210811003021-1387367
	I0811 00:30:23.032348 1387850 network_create.go:260] output of [docker network inspect addons-20210811003021-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210811003021-1387367
	
	** /stderr **
	I0811 00:30:23.032405 1387850 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:30:23.062238 1387850 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40000d7398] misses:0}
	I0811 00:30:23.062294 1387850 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 00:30:23.062314 1387850 network_create.go:106] attempt to create docker network addons-20210811003021-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 00:30:23.062373 1387850 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210811003021-1387367
	I0811 00:30:23.131311 1387850 network_create.go:90] docker network addons-20210811003021-1387367 192.168.49.0/24 created
	I0811 00:30:23.131341 1387850 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210811003021-1387367" container
	I0811 00:30:23.131409 1387850 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 00:30:23.160364 1387850 cli_runner.go:115] Run: docker volume create addons-20210811003021-1387367 --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 00:30:23.190804 1387850 oci.go:102] Successfully created a docker volume addons-20210811003021-1387367
	I0811 00:30:23.190897 1387850 cli_runner.go:115] Run: docker run --rm --name addons-20210811003021-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --entrypoint /usr/bin/test -v addons-20210811003021-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 00:30:25.611528 1387850 cli_runner.go:168] Completed: docker run --rm --name addons-20210811003021-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --entrypoint /usr/bin/test -v addons-20210811003021-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (2.420589766s)
	I0811 00:30:25.611562 1387850 oci.go:106] Successfully prepared a docker volume addons-20210811003021-1387367
	W0811 00:30:25.611598 1387850 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 00:30:25.611608 1387850 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 00:30:25.611675 1387850 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 00:30:25.611691 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:30:25.611714 1387850 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 00:30:25.611770 1387850 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210811003021-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 00:30:25.746101 1387850 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210811003021-1387367 --name addons-20210811003021-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210811003021-1387367 --network addons-20210811003021-1387367 --ip 192.168.49.2 --volume addons-20210811003021-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 00:30:26.279482 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Running}}
	I0811 00:30:26.347407 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:30:26.400431 1387850 cli_runner.go:115] Run: docker exec addons-20210811003021-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 00:30:26.499917 1387850 oci.go:278] the created container "addons-20210811003021-1387367" has a running status.
	I0811 00:30:26.499948 1387850 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa...
	I0811 00:30:26.732383 1387850 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 00:30:26.881674 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:30:26.918020 1387850 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 00:30:26.918042 1387850 kic_runner.go:115] Args: [docker exec --privileged addons-20210811003021-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 00:30:35.641601 1387850 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210811003021-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.02979324s)
	I0811 00:30:35.641632 1387850 kic.go:188] duration metric: took 10.029915 seconds to extract preloaded images to volume
	I0811 00:30:35.641709 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:30:35.681545 1387850 machine.go:88] provisioning docker machine ...
	I0811 00:30:35.681590 1387850 ubuntu.go:169] provisioning hostname "addons-20210811003021-1387367"
	I0811 00:30:35.681654 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:35.724584 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:35.724791 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:35.724811 1387850 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210811003021-1387367 && echo "addons-20210811003021-1387367" | sudo tee /etc/hostname
	I0811 00:30:35.855478 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210811003021-1387367
	
	I0811 00:30:35.855550 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:35.892128 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:35.892309 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:35.892335 1387850 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210811003021-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210811003021-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210811003021-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 00:30:36.016702 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 00:30:36.016728 1387850 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 00:30:36.016752 1387850 ubuntu.go:177] setting up certificates
	I0811 00:30:36.016760 1387850 provision.go:83] configureAuth start
	I0811 00:30:36.016819 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
	I0811 00:30:36.046617 1387850 provision.go:137] copyHostCerts
	I0811 00:30:36.046706 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 00:30:36.046821 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 00:30:36.046895 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 00:30:36.046947 1387850 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.addons-20210811003021-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210811003021-1387367]
	I0811 00:30:36.901481 1387850 provision.go:171] copyRemoteCerts
	I0811 00:30:36.901548 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 00:30:36.901597 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:36.932010 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:37.015797 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 00:30:37.032008 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0811 00:30:37.048411 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 00:30:37.064819 1387850 provision.go:86] duration metric: configureAuth took 1.048044188s
	I0811 00:30:37.064842 1387850 ubuntu.go:193] setting minikube options for container-runtime
	I0811 00:30:37.065077 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:37.094964 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:37.095136 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:37.095153 1387850 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 00:30:37.212966 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 00:30:37.212986 1387850 ubuntu.go:71] root file system type: overlay
	I0811 00:30:37.213159 1387850 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 00:30:37.213224 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:37.243079 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:37.243251 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:37.243366 1387850 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 00:30:37.365398 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 00:30:37.365479 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:37.396410 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:37.396581 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:37.396607 1387850 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 00:30:38.259628 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 00:30:37.360623318 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 00:30:38.259655 1387850 machine.go:91] provisioned docker machine in 2.578088023s
	I0811 00:30:38.259665 1387850 client.go:171] LocalClient.Create took 16.341840918s
	I0811 00:30:38.259674 1387850 start.go:168] duration metric: libmachine.API.Create for "addons-20210811003021-1387367" took 16.341902554s
	I0811 00:30:38.259682 1387850 start.go:267] post-start starting for "addons-20210811003021-1387367" (driver="docker")
	I0811 00:30:38.259696 1387850 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 00:30:38.259758 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 00:30:38.259813 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.298448 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.384125 1387850 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 00:30:38.386661 1387850 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 00:30:38.386687 1387850 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 00:30:38.386698 1387850 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 00:30:38.386705 1387850 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 00:30:38.386715 1387850 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 00:30:38.386779 1387850 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 00:30:38.386806 1387850 start.go:270] post-start completed in 127.109195ms
	I0811 00:30:38.387133 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
	I0811 00:30:38.416894 1387850 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json ...
	I0811 00:30:38.417167 1387850 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 00:30:38.417220 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.446953 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.529083 1387850 start.go:129] duration metric: createHost completed in 16.614007292s
	I0811 00:30:38.529119 1387850 start.go:80] releasing machines lock for "addons-20210811003021-1387367", held for 16.614173157s
	I0811 00:30:38.529201 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
	I0811 00:30:38.558592 1387850 ssh_runner.go:149] Run: systemctl --version
	I0811 00:30:38.558641 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.558656 1387850 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 00:30:38.558720 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.594358 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.601093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.830574 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 00:30:38.840501 1387850 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:30:38.851219 1387850 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 00:30:38.851291 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 00:30:38.861277 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 00:30:38.874263 1387850 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 00:30:38.958499 1387850 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 00:30:39.047217 1387850 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:30:39.056705 1387850 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 00:30:39.146104 1387850 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 00:30:39.155707 1387850 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:30:39.205950 1387850 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:30:39.260548 1387850 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 00:30:39.260677 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:30:39.290146 1387850 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 00:30:39.293407 1387850 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:30:39.302229 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:30:39.302303 1387850 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:30:39.341446 1387850 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:30:39.341473 1387850 docker.go:466] Images already preloaded, skipping extraction
	I0811 00:30:39.341528 1387850 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:30:39.380996 1387850 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:30:39.381035 1387850 cache_images.go:74] Images are preloaded, skipping loading
	I0811 00:30:39.381093 1387850 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 00:30:39.515442 1387850 cni.go:93] Creating CNI manager for ""
	I0811 00:30:39.515466 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:30:39.515474 1387850 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 00:30:39.515487 1387850 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210811003021-1387367 NodeName:addons-20210811003021-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 00:30:39.515632 1387850 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20210811003021-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 00:30:39.515719 1387850 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210811003021-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 00:30:39.515790 1387850 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 00:30:39.524221 1387850 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 00:30:39.524290 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 00:30:39.530941 1387850 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0811 00:30:39.543732 1387850 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 00:30:39.556462 1387850 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0811 00:30:39.568807 1387850 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 00:30:39.572672 1387850 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:30:39.581434 1387850 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367 for IP: 192.168.49.2
	I0811 00:30:39.581481 1387850 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 00:30:40.153609 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt ...
	I0811 00:30:40.153643 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt: {Name:mk59a57628b7830e6da9d2ae7e8c01cd5efde140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.153894 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key ...
	I0811 00:30:40.153911 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key: {Name:mk96e056b1cd3dc0b43035730f08908c26c31fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.154044 1387850 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 00:30:40.471227 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt ...
	I0811 00:30:40.471263 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt: {Name:mkfd778913fc3b0da592cfc8a7d08059e895c701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.471472 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key ...
	I0811 00:30:40.471492 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key: {Name:mk0ce74341fb606236ed0d73a79e2c5cede7537d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.471637 1387850 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key
	I0811 00:30:40.471650 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt with IP's: []
	I0811 00:30:40.932035 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt ...
	I0811 00:30:40.932074 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: {Name:mk9fa1e098b232414d6313e801fa75c86c1d49bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.932328 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key ...
	I0811 00:30:40.932348 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key: {Name:mkfe24cba1294c2a137e1fca2c7855f1633fb7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.932465 1387850 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2
	I0811 00:30:40.932477 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 00:30:41.378481 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 ...
	I0811 00:30:41.378518 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2: {Name:mk61de60fd373ccc807bd5cda384447d381e8be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:41.378737 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2 ...
	I0811 00:30:41.378752 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2: {Name:mk28ad1051189a18b59148562d5150391e295b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:41.378851 1387850 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt
	I0811 00:30:41.378911 1387850 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key
	I0811 00:30:41.378968 1387850 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key
	I0811 00:30:41.378981 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt with IP's: []
	I0811 00:30:42.573038 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt ...
	I0811 00:30:42.573080 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt: {Name:mk0190b4814f268c32de2db03fd82b7d16622974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:42.573306 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key ...
	I0811 00:30:42.573323 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key: {Name:mkb9c7131f1d68ca2e257df72147ba667f820217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:42.573512 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 00:30:42.573555 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 00:30:42.573587 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 00:30:42.573617 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 00:30:42.574683 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 00:30:42.592943 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 00:30:42.609946 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 00:30:42.626691 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 00:30:42.643759 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 00:30:42.660446 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 00:30:42.677226 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 00:30:42.693943 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 00:30:42.711059 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 00:30:42.727916 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 00:30:42.740400 1387850 ssh_runner.go:149] Run: openssl version
	I0811 00:30:42.746610 1387850 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 00:30:42.755297 1387850 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:30:42.758347 1387850 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:30:42.758400 1387850 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:30:42.763252 1387850 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 00:30:42.770353 1387850 kubeadm.go:390] StartCluster: {Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:30:42.770495 1387850 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 00:30:42.809002 1387850 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 00:30:42.816207 1387850 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 00:30:42.822961 1387850 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 00:30:42.823066 1387850 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 00:30:42.830328 1387850 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 00:30:42.830370 1387850 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 00:30:43.619917 1387850 out.go:204]   - Generating certificates and keys ...
	I0811 00:30:49.880691 1387850 out.go:204]   - Booting up control plane ...
	I0811 00:31:06.451215 1387850 out.go:204]   - Configuring RBAC rules ...
	I0811 00:31:06.874304 1387850 cni.go:93] Creating CNI manager for ""
	I0811 00:31:06.874325 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:31:06.874348 1387850 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 00:31:06.874455 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:06.874510 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=addons-20210811003021-1387367 minikube.k8s.io/updated_at=2021_08_11T00_31_06_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:07.395364 1387850 ops.go:34] apiserver oom_adj: -16
	I0811 00:31:07.395478 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:07.985651 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:08.485872 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:08.985765 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:09.485151 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:09.985899 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:10.485129 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:10.985624 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:11.485105 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:11.985253 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:12.485152 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:12.985351 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:13.485134 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:13.986075 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:14.485781 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:14.985900 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:15.485861 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:15.986014 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:16.485778 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:16.985653 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:17.485947 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:17.985276 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:18.485896 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:18.985977 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:19.485799 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:19.985256 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:20.485459 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:20.663342 1387850 kubeadm.go:985] duration metric: took 13.78893335s to wait for elevateKubeSystemPrivileges.
	I0811 00:31:20.663367 1387850 kubeadm.go:392] StartCluster complete in 37.893022782s
	I0811 00:31:20.663382 1387850 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:31:20.663521 1387850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:31:20.663950 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:31:21.189383 1387850 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210811003021-1387367" rescaled to 1
	I0811 00:31:21.189462 1387850 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:31:21.193170 1387850 out.go:177] * Verifying Kubernetes components...
	I0811 00:31:21.193243 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:31:21.189583 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 00:31:21.189906 1387850 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0811 00:31:21.193403 1387850 addons.go:59] Setting volumesnapshots=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.193416 1387850 addons.go:135] Setting addon volumesnapshots=true in "addons-20210811003021-1387367"
	I0811 00:31:21.193441 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.193953 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.194243 1387850 addons.go:59] Setting ingress=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.194260 1387850 addons.go:135] Setting addon ingress=true in "addons-20210811003021-1387367"
	I0811 00:31:21.194284 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.194705 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.194767 1387850 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.194790 1387850 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210811003021-1387367"
	I0811 00:31:21.194811 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.195183 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.195236 1387850 addons.go:59] Setting default-storageclass=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.195247 1387850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210811003021-1387367"
	I0811 00:31:21.195465 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.195519 1387850 addons.go:59] Setting gcp-auth=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.195539 1387850 mustload.go:65] Loading cluster: addons-20210811003021-1387367
	I0811 00:31:21.195857 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.195907 1387850 addons.go:59] Setting olm=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.195915 1387850 addons.go:135] Setting addon olm=true in "addons-20210811003021-1387367"
	I0811 00:31:21.195931 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.196301 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.196350 1387850 addons.go:59] Setting metrics-server=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.196358 1387850 addons.go:135] Setting addon metrics-server=true in "addons-20210811003021-1387367"
	I0811 00:31:21.196372 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.196738 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.196790 1387850 addons.go:59] Setting registry=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.196797 1387850 addons.go:135] Setting addon registry=true in "addons-20210811003021-1387367"
	I0811 00:31:21.196812 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.197403 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.197413 1387850 addons.go:59] Setting storage-provisioner=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.197526 1387850 addons.go:135] Setting addon storage-provisioner=true in "addons-20210811003021-1387367"
	W0811 00:31:21.197549 1387850 addons.go:147] addon storage-provisioner should already be in state true
	I0811 00:31:21.197579 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.198079 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.316737 1387850 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0811 00:31:21.318960 1387850 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0811 00:31:21.321029 1387850 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0811 00:31:21.321081 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0811 00:31:21.321090 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0811 00:31:21.321153 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.393126 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0811 00:31:21.393208 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0811 00:31:21.393219 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0811 00:31:21.393552 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.566994 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0811 00:31:21.570897 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0811 00:31:21.573431 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0811 00:31:21.575941 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0811 00:31:21.577933 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0811 00:31:21.589807 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0811 00:31:21.590689 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.593774 1387850 addons.go:135] Setting addon default-storageclass=true in "addons-20210811003021-1387367"
	W0811 00:31:21.593807 1387850 addons.go:147] addon default-storageclass should already be in state true
	I0811 00:31:21.593832 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.594305 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.594464 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0811 00:31:21.602078 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0811 00:31:21.594805 1387850 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0811 00:31:21.613391 1387850 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0811 00:31:21.610752 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0811 00:31:21.634506 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0811 00:31:21.634522 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0811 00:31:21.634580 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.610760 1387850 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0811 00:31:21.635648 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0811 00:31:21.635657 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0811 00:31:21.635701 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.644033 1387850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:31:21.644148 1387850 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:31:21.644157 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 00:31:21.644215 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.659467 1387850 out.go:177]   - Using image registry:2.7.1
	I0811 00:31:21.663767 1387850 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0811 00:31:21.665207 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0811 00:31:21.665236 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0811 00:31:21.665321 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.715375 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.740338 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0811 00:31:21.747616 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.767963 1387850 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0811 00:31:21.767996 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0811 00:31:21.768070 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.821081 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.893062 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 00:31:21.894732 1387850 node_ready.go:35] waiting up to 6m0s for node "addons-20210811003021-1387367" to be "Ready" ...
	I0811 00:31:21.900813 1387850 node_ready.go:49] node "addons-20210811003021-1387367" has status "Ready":"True"
	I0811 00:31:21.900877 1387850 node_ready.go:38] duration metric: took 6.121847ms waiting for node "addons-20210811003021-1387367" to be "Ready" ...
	I0811 00:31:21.900891 1387850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:31:21.970648 1387850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:21.971138 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.988186 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.998543 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.022093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.024598 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.025389 1387850 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 00:31:22.025403 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 00:31:22.025452 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:22.089084 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.145093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.189558 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0811 00:31:22.189581 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0811 00:31:22.302897 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0811 00:31:22.302958 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0811 00:31:22.410765 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:31:22.422948 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0811 00:31:22.423015 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0811 00:31:22.426280 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0811 00:31:22.432260 1387850 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0811 00:31:22.432318 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0811 00:31:22.436077 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0811 00:31:22.436129 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0811 00:31:22.444025 1387850 addons.go:135] Setting addon gcp-auth=true in "addons-20210811003021-1387367"
	I0811 00:31:22.444083 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:22.444621 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:22.494171 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0811 00:31:22.494196 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0811 00:31:22.507387 1387850 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0811 00:31:22.509963 1387850 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0811 00:31:22.510021 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0811 00:31:22.510031 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0811 00:31:22.510090 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:22.537941 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0811 00:31:22.537962 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0811 00:31:22.559492 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0811 00:31:22.559512 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0811 00:31:22.566425 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.567873 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0811 00:31:22.567891 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0811 00:31:22.624805 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0811 00:31:22.624829 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0811 00:31:22.720992 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0811 00:31:22.724130 1387850 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 00:31:22.724148 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0811 00:31:22.727176 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0811 00:31:22.727192 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0811 00:31:22.729946 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 00:31:22.764118 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0811 00:31:22.764137 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0811 00:31:22.774485 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0811 00:31:22.812352 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0811 00:31:22.812417 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0811 00:31:22.870611 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 00:31:22.917662 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0811 00:31:22.917720 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0811 00:31:22.943400 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0811 00:31:23.037187 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0811 00:31:23.037246 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0811 00:31:23.100781 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0811 00:31:23.100840 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0811 00:31:23.124682 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0811 00:31:23.217383 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0811 00:31:23.217443 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0811 00:31:23.268107 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0811 00:31:23.268163 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0811 00:31:23.349221 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0811 00:31:23.349241 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0811 00:31:23.433746 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0811 00:31:23.466200 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0811 00:31:23.466266 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0811 00:31:23.568633 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0811 00:31:23.568690 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0811 00:31:23.750358 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0811 00:31:23.750414 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0811 00:31:23.791988 1387850 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.898894924s)
	I0811 00:31:23.792052 1387850 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
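The bash pipeline completed above edits the CoreDNS ConfigMap in place: sed inserts a "hosts" block mapping host.minikube.internal to 192.168.49.1 immediately before the "forward . /etc/resolv.conf" plugin line, and kubectl replace writes the result back. A hedged Go sketch of the same string transformation; the sample Corefile in main() is an assumption for demonstration only.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block for host.minikube.internal
// right before the "forward . /etc/resolv.conf" line, which is what the sed
// expression in the log does to the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert before the forward plugin
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Sample Corefile for demonstration only.
	sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(sample, "192.168.49.1"))
}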
	I0811 00:31:23.955311 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0811 00:31:23.955374 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0811 00:31:23.956419 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.545593968s)
	I0811 00:31:24.082471 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:24.160978 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0811 00:31:24.161066 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0811 00:31:24.203405 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0811 00:31:24.203429 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0811 00:31:24.411828 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0811 00:31:24.411854 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0811 00:31:24.432074 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0811 00:31:26.152298 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:28.575273 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:31.053360 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:31.587406 1387850 pod_ready.go:97] error getting pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5wk4c" not found
	I0811 00:31:31.587437 1387850 pod_ready.go:81] duration metric: took 9.616760181s waiting for pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace to be "Ready" ...
	E0811 00:31:31.587449 1387850 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5wk4c" not found
	I0811 00:31:31.587458 1387850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.759414 1387850 pod_ready.go:92] pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.759439 1387850 pod_ready.go:81] duration metric: took 171.972167ms waiting for pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.759450 1387850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.874331 1387850 pod_ready.go:92] pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.874356 1387850 pod_ready.go:81] duration metric: took 114.898034ms waiting for pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.874369 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.877164 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.147157564s)
	I0811 00:31:31.877240 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (9.156226911s)
	W0811 00:31:31.877276 1387850 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0811 00:31:31.877292 1387850 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
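The "no matches for kind" errors above are the usual CRD race: crds.yaml and olm.yaml are applied in a single pass, so the OperatorGroup, ClusterServiceVersion, and CatalogSource objects are rejected until the just-created CRDs have been registered by the API server, and the apply is retried after a short backoff. A rough Go sketch of that retry pattern; the helper name, attempt count, and backoff values are assumptions, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply -f ..." a few times with a growing
// delay, so custom resources rejected with "no matches for kind" succeed once
// the CRDs created in the same batch have been registered.
func applyWithRetry(kubectl string, manifests []string, attempts int, backoff time.Duration) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(kubectl, args...).Run(); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2 // back off a little longer before each retry
	}
	return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
}

func main() {
	err := applyWithRetry("kubectl",
		[]string{"/etc/kubernetes/addons/crds.yaml", "/etc/kubernetes/addons/olm.yaml"},
		5, 300*time.Millisecond)
	fmt.Println("result:", err)
}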
	I0811 00:31:31.877402 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (9.102897402s)
	I0811 00:31:31.877416 1387850 addons.go:313] Verifying addon ingress=true in "addons-20210811003021-1387367"
	I0811 00:31:31.887205 1387850 out.go:177] * Verifying ingress addon...
	I0811 00:31:31.889037 1387850 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0811 00:31:31.877748 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.007067811s)
	W0811 00:31:31.889240 1387850 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0811 00:31:31.889258 1387850 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0811 00:31:31.877785 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.934328602s)
	I0811 00:31:31.889280 1387850 addons.go:313] Verifying addon registry=true in "addons-20210811003021-1387367"
	I0811 00:31:31.877850 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.753144936s)
	I0811 00:31:31.877905 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (8.444139497s)
	I0811 00:31:31.878122 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.446019981s)
	I0811 00:31:31.893287 1387850 addons.go:313] Verifying addon metrics-server=true in "addons-20210811003021-1387367"
	I0811 00:31:31.893306 1387850 out.go:177] * Verifying registry addon...
	I0811 00:31:31.894964 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0811 00:31:31.895140 1387850 addons.go:313] Verifying addon gcp-auth=true in "addons-20210811003021-1387367"
	I0811 00:31:31.897634 1387850 out.go:177] * Verifying gcp-auth addon...
	I0811 00:31:31.899263 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0811 00:31:31.893256 1387850 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210811003021-1387367"
	I0811 00:31:31.902262 1387850 out.go:177] * Verifying csi-hostpath-driver addon...
	I0811 00:31:31.903868 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0811 00:31:31.957269 1387850 pod_ready.go:92] pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.957291 1387850 pod_ready.go:81] duration metric: took 82.914978ms waiting for pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.957302 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.969760 1387850 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0811 00:31:31.969785 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:31.973452 1387850 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0811 00:31:31.973479 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:31.973901 1387850 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0811 00:31:31.973912 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:31.974638 1387850 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0811 00:31:31.974650 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
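The kapi "Waiting for pod with label ..." and "current state: Pending" lines above (and the many that follow) are a label-selector readiness loop: list the pods matching the selector and keep polling until they all report Running. A self-contained Go sketch of that kind of wait using kubectl's jsonpath output; the selector, namespace, and 6m timeout mirror the log, while the poll interval and helper name are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLabeledPods polls the phase of every pod matching the label selector
// and returns once all of them report Running.
func waitForLabeledPods(ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && allRunning(phases) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not Running within %s", selector, ns, timeout)
}

func allRunning(phases []string) bool {
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	err := waitForLabeledPods("kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
	fmt.Println("wait result:", err)
}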
	I0811 00:31:31.983912 1387850 pod_ready.go:92] pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.983934 1387850 pod_ready.go:81] duration metric: took 26.622888ms waiting for pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.983947 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbv8p" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.999660 1387850 pod_ready.go:92] pod "kube-proxy-hbv8p" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.999681 1387850 pod_ready.go:81] duration metric: took 15.72646ms waiting for pod "kube-proxy-hbv8p" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.999692 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:32.134974 1387850 pod_ready.go:92] pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:32.134996 1387850 pod_ready.go:81] duration metric: took 135.293862ms waiting for pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:32.135007 1387850 pod_ready.go:38] duration metric: took 10.234102984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:31:32.135022 1387850 api_server.go:50] waiting for apiserver process to appear ...
	I0811 00:31:32.135065 1387850 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 00:31:32.157221 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0811 00:31:32.250282 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 00:31:32.278457 1387850 api_server.go:70] duration metric: took 11.088961421s to wait for apiserver process to appear ...
	I0811 00:31:32.278478 1387850 api_server.go:86] waiting for apiserver healthz status ...
	I0811 00:31:32.278488 1387850 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 00:31:32.307214 1387850 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 00:31:32.308374 1387850 api_server.go:139] control plane version: v1.21.3
	I0811 00:31:32.308394 1387850 api_server.go:129] duration metric: took 29.908897ms to wait for apiserver health ...
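The healthz wait above is a plain HTTPS GET against the apiserver, treating a 200 response with body "ok" as healthy. A minimal Go sketch of that probe; certificate verification is skipped here only to keep the example self-contained, whereas a real client would trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy issues GET <url> and reports whether the apiserver
// answered 200 with body "ok", as seen in the log above.
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.2:8443/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}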
	I0811 00:31:32.308401 1387850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 00:31:32.326005 1387850 system_pods.go:59] 17 kube-system pods found
	I0811 00:31:32.326040 1387850 system_pods.go:61] "coredns-558bd4d5db-j4xjh" [f948b5ac-414e-4239-ad46-497ef8f75853] Running
	I0811 00:31:32.326045 1387850 system_pods.go:61] "csi-hostpath-attacher-0" [2f0d8b28-ddaf-458b-b5b5-8b3c07c09415] Pending
	I0811 00:31:32.326050 1387850 system_pods.go:61] "csi-hostpath-provisioner-0" [1ec66ec1-bc43-458c-aec5-9987f687ac44] Pending
	I0811 00:31:32.326055 1387850 system_pods.go:61] "csi-hostpath-resizer-0" [79bc0e72-5889-4c3f-8670-8c2c53610472] Pending
	I0811 00:31:32.326060 1387850 system_pods.go:61] "csi-hostpath-snapshotter-0" [adee0893-0da6-42b1-b77a-115426aeb95d] Pending
	I0811 00:31:32.326065 1387850 system_pods.go:61] "csi-hostpathplugin-0" [6c1cecb2-45cd-41c0-b435-d9d52972488e] Pending
	I0811 00:31:32.326070 1387850 system_pods.go:61] "etcd-addons-20210811003021-1387367" [66a09e0e-6be7-443c-8a42-6f5c84c19094] Running
	I0811 00:31:32.326076 1387850 system_pods.go:61] "kube-apiserver-addons-20210811003021-1387367" [9691ed48-418f-4dad-8ac3-30d61a430bbf] Running
	I0811 00:31:32.326085 1387850 system_pods.go:61] "kube-controller-manager-addons-20210811003021-1387367" [3b729013-1dc6-4788-9f3c-f7aa402e59e1] Running
	I0811 00:31:32.326089 1387850 system_pods.go:61] "kube-proxy-hbv8p" [368541dc-ff39-4aee-af59-de331b32e889] Running
	I0811 00:31:32.326099 1387850 system_pods.go:61] "kube-scheduler-addons-20210811003021-1387367" [44340cdb-fad4-460c-994c-cf7586c7cb72] Running
	I0811 00:31:32.326106 1387850 system_pods.go:61] "metrics-server-77c99ccb96-7bz4t" [f135d883-ab80-4dd8-a141-333424152bcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0811 00:31:32.326114 1387850 system_pods.go:61] "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0811 00:31:32.326124 1387850 system_pods.go:61] "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0811 00:31:32.326132 1387850 system_pods.go:61] "snapshot-controller-989f9ddc8-f8q5j" [b992001c-a1c6-4425-b360-98696726a82a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.326144 1387850 system_pods.go:61] "snapshot-controller-989f9ddc8-pjvmj" [9502385f-ad82-4081-bc88-a44d574dad9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.326150 1387850 system_pods.go:61] "storage-provisioner" [f5eb4d07-1355-48c6-aa1c-17031e9d86b9] Running
	I0811 00:31:32.326160 1387850 system_pods.go:74] duration metric: took 17.753408ms to wait for pod list to return data ...
	I0811 00:31:32.326168 1387850 default_sa.go:34] waiting for default service account to be created ...
	I0811 00:31:32.473443 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:32.493205 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:32.493584 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:32.494370 1387850 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0811 00:31:32.494390 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:32.510135 1387850 default_sa.go:45] found service account: "default"
	I0811 00:31:32.510161 1387850 default_sa.go:55] duration metric: took 183.984313ms for default service account to be created ...
	I0811 00:31:32.510170 1387850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 00:31:32.777565 1387850 system_pods.go:86] 17 kube-system pods found
	I0811 00:31:32.777597 1387850 system_pods.go:89] "coredns-558bd4d5db-j4xjh" [f948b5ac-414e-4239-ad46-497ef8f75853] Running
	I0811 00:31:32.777605 1387850 system_pods.go:89] "csi-hostpath-attacher-0" [2f0d8b28-ddaf-458b-b5b5-8b3c07c09415] Pending
	I0811 00:31:32.777610 1387850 system_pods.go:89] "csi-hostpath-provisioner-0" [1ec66ec1-bc43-458c-aec5-9987f687ac44] Pending
	I0811 00:31:32.777615 1387850 system_pods.go:89] "csi-hostpath-resizer-0" [79bc0e72-5889-4c3f-8670-8c2c53610472] Pending
	I0811 00:31:32.777620 1387850 system_pods.go:89] "csi-hostpath-snapshotter-0" [adee0893-0da6-42b1-b77a-115426aeb95d] Pending
	I0811 00:31:32.777629 1387850 system_pods.go:89] "csi-hostpathplugin-0" [6c1cecb2-45cd-41c0-b435-d9d52972488e] Pending
	I0811 00:31:32.777634 1387850 system_pods.go:89] "etcd-addons-20210811003021-1387367" [66a09e0e-6be7-443c-8a42-6f5c84c19094] Running
	I0811 00:31:32.777645 1387850 system_pods.go:89] "kube-apiserver-addons-20210811003021-1387367" [9691ed48-418f-4dad-8ac3-30d61a430bbf] Running
	I0811 00:31:32.777652 1387850 system_pods.go:89] "kube-controller-manager-addons-20210811003021-1387367" [3b729013-1dc6-4788-9f3c-f7aa402e59e1] Running
	I0811 00:31:32.777661 1387850 system_pods.go:89] "kube-proxy-hbv8p" [368541dc-ff39-4aee-af59-de331b32e889] Running
	I0811 00:31:32.777666 1387850 system_pods.go:89] "kube-scheduler-addons-20210811003021-1387367" [44340cdb-fad4-460c-994c-cf7586c7cb72] Running
	I0811 00:31:32.777680 1387850 system_pods.go:89] "metrics-server-77c99ccb96-7bz4t" [f135d883-ab80-4dd8-a141-333424152bcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0811 00:31:32.777693 1387850 system_pods.go:89] "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0811 00:31:32.777708 1387850 system_pods.go:89] "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0811 00:31:32.777716 1387850 system_pods.go:89] "snapshot-controller-989f9ddc8-f8q5j" [b992001c-a1c6-4425-b360-98696726a82a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.777724 1387850 system_pods.go:89] "snapshot-controller-989f9ddc8-pjvmj" [9502385f-ad82-4081-bc88-a44d574dad9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.777730 1387850 system_pods.go:89] "storage-provisioner" [f5eb4d07-1355-48c6-aa1c-17031e9d86b9] Running
	I0811 00:31:32.777737 1387850 system_pods.go:126] duration metric: took 267.562785ms to wait for k8s-apps to be running ...
	I0811 00:31:32.777744 1387850 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 00:31:32.777795 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:31:33.022664 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:33.043137 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:33.043695 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:33.076210 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:33.475631 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:33.487013 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:33.487474 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:33.488225 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:33.973986 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:33.978066 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:33.978643 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:33.982551 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:34.475328 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:34.484737 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:34.485574 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:34.491047 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:34.978754 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:34.986378 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:35.002282 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:35.003252 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:35.492406 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:35.498168 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:35.498801 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:35.507714 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:35.849572 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (3.692315583s)
	I0811 00:31:35.849751 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.599438997s)
	I0811 00:31:35.849801 1387850 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.07199149s)
	I0811 00:31:35.849823 1387850 system_svc.go:56] duration metric: took 3.072076781s WaitForService to wait for kubelet.
	I0811 00:31:35.849853 1387850 kubeadm.go:547] duration metric: took 14.66035318s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 00:31:35.849889 1387850 node_conditions.go:102] verifying NodePressure condition ...
	I0811 00:31:35.856082 1387850 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:31:35.856156 1387850 node_conditions.go:123] node cpu capacity is 2
	I0811 00:31:35.856183 1387850 node_conditions.go:105] duration metric: took 6.277447ms to run NodePressure ...
	I0811 00:31:35.856204 1387850 start.go:231] waiting for startup goroutines ...
	I0811 00:31:36.005256 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:36.013661 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:36.014789 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:36.015952 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:36.473926 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:36.483249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:36.491648 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:36.492571 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:36.974069 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:36.986531 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:37.006446 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:37.013357 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:37.489649 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:37.490803 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:37.491434 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:37.504790 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:37.975170 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:37.989580 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:37.989802 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:38.025303 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:38.474918 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:38.492730 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:38.496663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:38.497970 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:38.973085 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:38.978997 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:38.979735 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:38.982227 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:39.474799 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:39.481528 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:39.481921 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:39.484317 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:39.972804 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:39.978286 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:39.979517 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:39.981143 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:40.474715 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:40.481427 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:40.488665 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:40.494651 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:40.976036 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:40.980427 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:40.983056 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:40.987033 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:41.476447 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:41.479669 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:41.480016 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:41.483594 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:41.973617 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:41.978089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:41.982266 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:41.984663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:42.472736 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:42.487219 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:42.492985 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:42.493947 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:42.973539 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:42.982946 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:42.986923 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:42.987907 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:43.473857 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:43.479414 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:43.481641 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:43.483870 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:43.973517 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:43.982133 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:43.983164 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:43.983422 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:44.473837 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:44.479785 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:44.484353 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:44.488415 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:44.979994 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:44.985076 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:44.986656 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:44.992396 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:45.473966 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:45.481107 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:45.481941 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:45.487107 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:45.988284 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:46.002665 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:46.002780 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:46.007448 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:46.474794 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:46.482191 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:46.485676 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:46.487225 1387850 kapi.go:108] duration metric: took 14.592259009s to wait for kubernetes.io/minikube-addons=registry ...
	I0811 00:31:46.974885 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:46.981478 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:46.989618 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:47.487399 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:47.491281 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:47.492579 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:47.975251 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:47.985471 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:47.986668 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:48.490136 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:48.490788 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:48.506411 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:48.973392 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:48.977674 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:48.981056 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:49.474082 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:49.481091 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:49.485354 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:49.973955 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:49.996161 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:49.997622 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:50.475171 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:50.524381 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:50.525120 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:50.974322 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:50.982138 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:50.982860 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:51.474535 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:51.479680 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:51.480476 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:51.973249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:51.978600 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:51.981854 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:52.473226 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:52.477902 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:52.482802 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:52.973434 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:52.978777 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:52.980477 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:53.473319 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:53.477756 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:53.487033 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:53.973687 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:53.978698 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:53.981280 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:54.475089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:54.481790 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:54.482384 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:54.985792 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:54.988767 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:54.990896 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:55.473392 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:55.477987 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:55.481815 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:55.973541 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:55.977681 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:55.980675 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:56.474324 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:56.481953 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:56.484326 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:56.974064 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:56.982020 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:56.982410 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:57.473755 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:57.481929 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:57.482893 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:57.973431 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:57.982154 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:57.982719 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:58.473822 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:58.481555 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:58.482077 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:58.973492 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:58.979950 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:58.981978 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:59.473887 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:59.477935 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:59.481649 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:59.972952 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:59.982204 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:59.986360 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:00.473567 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:00.480713 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:00.483313 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:00.973469 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:00.977297 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:00.980898 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:01.495843 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:01.499200 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:01.504461 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:01.973397 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:01.977703 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:01.981521 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:02.473108 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:02.480024 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:02.480860 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:02.973624 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:02.978280 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:02.981042 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:03.473799 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:03.480741 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:03.481368 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:03.972770 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:03.983282 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:03.984249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:04.491095 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:04.492784 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:04.492970 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:04.974758 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:04.984471 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:04.985330 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:05.472850 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:05.477703 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:05.482839 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:05.973192 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:05.978328 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:05.980518 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:06.473106 1387850 kapi.go:108] duration metric: took 34.573837309s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0811 00:32:06.475346 1387850 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210811003021-1387367 cluster.
	I0811 00:32:06.477505 1387850 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0811 00:32:06.479544 1387850 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0811 00:32:06.481981 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:06.487657 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:06.981307 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:06.981745 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:07.485847 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:07.488036 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:07.980464 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:07.982199 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:08.479056 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:08.487132 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:08.983438 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:08.989743 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:09.478278 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:09.482809 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:09.977227 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:09.981531 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:10.479714 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:10.480748 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:10.980089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:10.980855 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:11.479066 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:11.480977 1387850 kapi.go:108] duration metric: took 39.577106963s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0811 00:32:11.978497 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:12.477434 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:12.977568 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:13.478968 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:13.978087 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:14.478651 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:14.978215 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:15.479057 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:15.978520 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:16.478308 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:16.977654 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:17.478970 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:17.978263 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:18.479065 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:18.978518 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:19.477635 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:19.978219 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:20.482858 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:20.978313 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:21.479093 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:21.977637 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:22.477774 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:22.977780 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:23.478605 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:23.977808 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:24.477511 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:24.977509 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:25.480776 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:25.978310 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:26.478828 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:26.978037 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:27.478852 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:27.978490 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:28.485097 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:28.978352 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:29.482810 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:29.978509 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:30.478065 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:30.978093 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:31.478736 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:31.978713 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:32.478985 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:32.977984 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:33.478992 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:33.978850 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:34.482336 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:34.978545 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:35.478663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:35.978031 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:36.478559 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:36.979038 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:37.478422 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:37.978166 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:38.478873 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:38.977654 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:39.482663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:39.977950 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:40.482296 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:40.978478 1387850 kapi.go:108] duration metric: took 1m9.089435631s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0811 00:32:40.981216 1387850 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, olm, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0811 00:32:40.981241 1387850 addons.go:344] enableAddons completed in 1m19.791344476s
	I0811 00:32:41.039327 1387850 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0811 00:32:41.041912 1387850 out.go:177] * Done! kubectl is now configured to use "addons-20210811003021-1387367" cluster and "default" namespace by default
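Annotation: the gcp-auth messages above (around 00:32:06) explain that the addon mounts GCP credentials into every newly created pod unless the pod's configuration carries a label with the `gcp-auth-skip-secret` key. As a minimal sketch of opting a pod out, assuming a hypothetical pod name and image (nothing below is taken from this run), one could apply:

  kubectl --context addons-20210811003021-1387367 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: skip-gcp-auth-demo        # hypothetical name, for illustration only
    labels:
      gcp-auth-skip-secret: "true"  # the message above only asks for the key; this value is arbitrary
  spec:
    containers:
    - name: demo
      image: busybox                # hypothetical image
      command: ["sleep", "3600"]
  EOF

The --context value is simply the cluster name reported in the "Done!" line above; per the same messages, pods created before the addon was enabled would need to be recreated (or the addon re-enabled with --refresh) to pick the credentials up.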
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-08-11 00:30:27 UTC, end at Wed 2021-08-11 00:35:34 UTC. --
	Aug 11 00:32:01 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:01.620796151Z" level=info msg="ignoring event" container=9247acdc5d64f443b68db1cc6df58dd5e5f4feaffb68b6da6ba21b7bd4ab39b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:02 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:02.097502833Z" level=info msg="ignoring event" container=e9f14f481ab19b6c6e7291aa61d196c93e741d023142bb277d525f7eafba2af7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:02 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:02.253946601Z" level=info msg="ignoring event" container=62ab535b11128e12d593bf48f375e4900a6d8cfe79458696ce2f1101199f7e2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:02 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:02.812654117Z" level=info msg="ignoring event" container=db4d9e368f2580d69f88a161fa33362694084315575f1a0a21aeaf98413c7581 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:03 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:03.147865444Z" level=info msg="ignoring event" container=3c1b9bd41a420e396cc6afafa5fda82d26f9322d71f8ea1e6a339e9620d7018f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:03 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:03.535036967Z" level=warning msg="reference for unknown type: " digest="sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd"
	Aug 11 00:32:05 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:05.363269927Z" level=warning msg="reference for unknown type: " digest="sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108" remote="k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108"
	Aug 11 00:32:07 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:07.285998850Z" level=warning msg="reference for unknown type: " digest="sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659" remote="k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659"
	Aug 11 00:32:09 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:09.310317276Z" level=warning msg="reference for unknown type: " digest="sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994" remote="k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994"
	Aug 11 00:32:15 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:15.725559728Z" level=info msg="ignoring event" container=df2492023ba1fcfd3bbc222fde5b8a84637a986c090f7faa5c80fc0697623c10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:16 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:16.755442422Z" level=info msg="ignoring event" container=73446dad2ab7fc3a8445c107bec81ade8dd2693ce1b4982bfe8cbe9ccfb1b801 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:27 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:27.741112515Z" level=info msg="ignoring event" container=edfbb95e4087c049db7afcb9e1f8ce0508d820fd630038e8da3e4c5529efa58e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:33 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:33.357170162Z" level=warning msg="reference for unknown type: " digest="sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a" remote="k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a"
	Aug 11 00:32:49 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:49.730895493Z" level=info msg="ignoring event" container=56c8e32523e1b87b7f33c103a15d47ff2f319a0571ea580c5aef97731015362a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:51 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:51.770304489Z" level=info msg="ignoring event" container=e9eeac65f955c313cbc23a77e5764104ecaf110d3cdeebf1b41e5f20aa8d8d05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:53 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:53.264806699Z" level=info msg="ignoring event" container=5c39d034cf49e0969331d75b84592596fb5aa5c3b12ccc45db3a7e8d7080dea7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:32:53 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:32:53.993111992Z" level=info msg="ignoring event" container=25973087bf63929228da7b9f1c9d158ea8900ab97176337e251a3438f1d147f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:33:20 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:33:20.739065582Z" level=info msg="ignoring event" container=f138a65ea0d8768aa9d6fd38db702cebb478c50a5499ad0d1aa321cc28d8aa55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:33:37 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:33:37.754803103Z" level=info msg="ignoring event" container=5314100ae30c55c2c5fca50b59180db59f7d0052b4c4d71500579f8559290a48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:33:44 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:33:44.728487709Z" level=info msg="ignoring event" container=38c2a91f09c071cbeb4b3668c2ebd6a57f5b9c1dfdc3b138a383795cc803f634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:34:49 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:34:49.744487100Z" level=info msg="ignoring event" container=b14e7fada5642b72a77355be3587194eca52b65c2e16b1a9d03d4a29ec8ff73c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:35:01 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:01.733119773Z" level=info msg="ignoring event" container=93d05e6a4fdeadc429b3b8680409bcc95a4911306bd7c468e2c22e8baba6c554 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:35:14 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:14.725639809Z" level=info msg="ignoring event" container=92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:35:32 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:32.869368899Z" level=info msg="ignoring event" container=716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:35:32 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:35:32.945418515Z" level=info msg="ignoring event" container=521d909af58b2ca6b97ebc302833c3719f1cdf051b0e5a8342fb46c49a87d86c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                   CREATED             STATE               NAME                                     ATTEMPT             POD ID
	92a9c313b50af       d544402579747                                                                                                                           20 seconds ago      Exited              olm-operator                             5                   239321d9715a9
	93d05e6a4fdea       d544402579747                                                                                                                           33 seconds ago      Exited              catalog-operator                         5                   dc27d55e9b2e5
	b14e7fada5642       60dc18151daf8                                                                                                                           45 seconds ago      Exited              registry-proxy                           5                   ad962c09e2ff7
	383628dc34c7f       k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a                             2 minutes ago       Running             controller                               0                   87f9e6f6e10e0
	f9d910b0983cb       k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994                            3 minutes ago       Running             liveness-probe                           0                   8e312aa6ef3e1
	86c4bb5905f03       k8s.gcr.io/sig-storage/hostpathplugin@sha256:b526bd29630261eceecf2d38c84d4f340a424d57e1e2661111e2649a4663b659                           3 minutes ago       Running             hostpath                                 0                   8e312aa6ef3e1
	e6aa1f5da5206       k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108                3 minutes ago       Running             node-driver-registrar                    0                   8e312aa6ef3e1
	c54d1e7369442       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:c407ad6ee97d8a0e8a21c713e2d9af66aaf73315e4a123874c00b786f962f3cd                            3 minutes ago       Running             gcp-auth                                 0                   a7b60bbf33b9e
	3b180fbf110d5       k8s.gcr.io/sig-storage/csi-external-health-monitor-controller@sha256:14988b598a180cc0282f3f4bc982371baf9a9c9b80878fb385f8ae8bd04ecf16   3 minutes ago       Running             csi-external-health-monitor-controller   0                   8e312aa6ef3e1
	62ab535b11128       a883f7fc35610                                                                                                                           3 minutes ago       Exited              patch                                    1                   3c1b9bd41a420
	9247acdc5d64f       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7                                    3 minutes ago       Exited              create                                   0                   e9f14f481ab19
	f491b4ee0dd1e       k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09                             3 minutes ago       Running             csi-attacher                             0                   55d0941342052
	ee41f2c71eaef       k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a                              3 minutes ago       Running             csi-resizer                              0                   b83b8d797e576
	51cfceedd8f38       k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782                          3 minutes ago       Running             csi-snapshotter                          0                   6c7f1ea7004a5
	8792838a02368       k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2                          3 minutes ago       Running             csi-provisioner                          0                   bc4b5104a1fa8
	e8bdb4e95a016       k8s.gcr.io/sig-storage/csi-external-health-monitor-agent@sha256:c20d4a4772599e68944452edfcecc944a1df28c19e94b942d526ca25a522ea02        3 minutes ago       Running             csi-external-health-monitor-agent        0                   8e312aa6ef3e1
	34d13b67bdbb1       622522dfd285b                                                                                                                           3 minutes ago       Exited              patch                                    1                   59f9d2cea4a05
	86a9debc99cc3       jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689                                    3 minutes ago       Exited              create                                   0                   68356fc459748
	e8470704d7993       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      3 minutes ago       Running             volume-snapshot-controller               0                   a188b55eb2300
	29e27695dc868       k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4                      3 minutes ago       Running             volume-snapshot-controller               0                   9abc8197d88f7
	849a40fa43175       k8s.gcr.io/metrics-server/metrics-server@sha256:dbc33d7d35d2a9cc5ab402005aa7a0d13be6192f3550c7d42cba8d2d5e3a5d62                        3 minutes ago       Running             metrics-server                           0                   f7efddac80035
	0d6ae04912a61       ba04bb24b9575                                                                                                                           4 minutes ago       Running             storage-provisioner                      0                   c19163d31a596
	4f14ad2dc9238       1a1f05a2cd7c2                                                                                                                           4 minutes ago       Running             coredns                                  0                   f3126492d7db3
	3e17f7de9e8a2       4ea38350a1beb                                                                                                                           4 minutes ago       Running             kube-proxy                               0                   4393665d45427
	178036f64854a       cb310ff289d79                                                                                                                           4 minutes ago       Running             kube-controller-manager                  0                   7e5d403628742
	daa4bc492ed71       05b738aa1bc63                                                                                                                           4 minutes ago       Running             etcd                                     0                   5c7734c8acc19
	107ea2d3d596b       44a6d50ef170d                                                                                                                           4 minutes ago       Running             kube-apiserver                           0                   efd4677540c6b
	4f7326edc3cff       31a3b96cefc1e                                                                                                                           4 minutes ago       Running             kube-scheduler                           0                   367240f7e40a9
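Annotation: the status table above shows the registry-proxy, olm-operator, and catalog-operator containers in the Exited state after several attempts. A hedged way to pull their logs for this profile, assuming the truncated container IDs from the table still resolve on the node and that the usual `minikube ssh -- <command>` form is available, is to run docker inside the node:

  # Illustrative only: IDs are the truncated values from the table; docker accepts unique prefixes.
  minikube -p addons-20210811003021-1387367 ssh -- docker logs b14e7fada5642   # registry-proxy
  minikube -p addons-20210811003021-1387367 ssh -- docker logs 92a9c313b50af   # olm-operator
  minikube -p addons-20210811003021-1387367 ssh -- docker logs 93d05e6a4fdea   # catalog-operator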
	
	* 
	* ==> coredns [4f14ad2dc923] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210811003021-1387367
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210811003021-1387367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=addons-20210811003021-1387367
	                    minikube.k8s.io/updated_at=2021_08_11T00_31_06_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210811003021-1387367
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210811003021-1387367"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Aug 2021 00:31:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210811003021-1387367
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Aug 2021 00:35:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Aug 2021 00:33:12 +0000   Wed, 11 Aug 2021 00:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Aug 2021 00:33:12 +0000   Wed, 11 Aug 2021 00:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Aug 2021 00:33:12 +0000   Wed, 11 Aug 2021 00:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Aug 2021 00:33:12 +0000   Wed, 11 Aug 2021 00:31:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210811003021-1387367
	Capacity:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                7597b455-7869-476e-86a2-9b994506f601
	  Boot ID:                    dff2c102-a0cf-4fb0-a2ea-36617f3a3229
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-5954cc4898-vdwfq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-tt28h                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m6s
	  kube-system                 coredns-558bd4d5db-j4xjh                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m14s
	  kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 etcd-addons-20210811003021-1387367                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m24s
	  kube-system                 kube-apiserver-addons-20210811003021-1387367             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-addons-20210811003021-1387367    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-hbv8p                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-addons-20210811003021-1387367             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 metrics-server-77c99ccb96-7bz4t                          100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         4m9s
	  kube-system                 registry-dzdlw                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 registry-proxy-xfrxz                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 snapshot-controller-989f9ddc8-f8q5j                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 snapshot-controller-989f9ddc8-pjvmj                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  olm                         catalog-operator-75d496484d-lftth                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         4m3s
	  olm                         olm-operator-859c88c96-zfpv9                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                970m (48%)   0 (0%)
	  memory             800Mi (10%)  170Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m39s (x5 over 4m40s)  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s (x4 over 4m40s)  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s (x4 over 4m40s)  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m24s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m24s                  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s                  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s                  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m24s                  kubelet     Node addons-20210811003021-1387367 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m14s                  kubelet     Node addons-20210811003021-1387367 status is now: NodeReady
	  Normal  Starting                 4m13s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001104] FS-Cache: O-key=[8] 'c762010000000000'
	[  +0.000863] FS-Cache: N-cookie c=000000006895995f [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001353] FS-Cache: N-cookie d=00000000d0f41ca1 n=0000000007d05ee7
	[  +0.001085] FS-Cache: N-key=[8] 'c762010000000000'
	[Aug10 23:20] FS-Cache: Duplicate cookie detected
	[  +0.000856] FS-Cache: O-cookie c=00000000af756993 [p=000000003cfe13d3 fl=226 nc=0 na=1]
	[  +0.001346] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000009356b987
	[  +0.001071] FS-Cache: O-key=[8] 'c562010000000000'
	[  +0.000838] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001331] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000e0c82591
	[  +0.001061] FS-Cache: N-key=[8] 'c562010000000000'
	[  +0.001531] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000000ccb09f62 [p=000000003cfe13d3 fl=226 nc=0 na=1]
	[  +0.001326] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000001c672d8a
	[  +0.001069] FS-Cache: O-key=[8] 'c762010000000000'
	[  +0.001140] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001307] FS-Cache: N-cookie d=00000000d0f41ca1 n=0000000083a2ea2e
	[  +0.001068] FS-Cache: N-key=[8] 'c762010000000000'
	[  +0.001828] FS-Cache: Duplicate cookie detected
	[  +0.000775] FS-Cache: O-cookie c=0000000089195cf5 [p=000000003cfe13d3 fl=226 nc=0 na=1]
	[  +0.001346] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000024759c93
	[  +0.001076] FS-Cache: O-key=[8] 'c662010000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001320] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000f79fca59
	[  +0.001058] FS-Cache: N-key=[8] 'c662010000000000'
	
	* 
	* ==> etcd [daa4bc492ed7] <==
	* 2021-08-11 00:31:30.568152 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:31:40.565963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:31:50.570132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:32:00.566454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:32:10.566448 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:32:20.566000 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:32:30.566032 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:32:40.566114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:32:50.566765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:33:00.566080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:33:10.566895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:33:20.566697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:33:30.565878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:33:40.566563 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:33:50.566261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:34:00.566545 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:34:10.566101 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:34:20.566261 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:34:30.566469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:34:40.566194 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:34:50.565895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:35:00.566330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:35:10.566237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:35:20.566040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:35:30.566446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:35:34 up 10:18,  0 users,  load average: 0.97, 1.96, 2.58
	Linux addons-20210811003021-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [107ea2d3d596] <==
	* E0811 00:31:45.992225       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.62.170:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.62.170:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.62.170:443: connect: connection refused
	I0811 00:31:50.563247       1 client.go:360] parsed scheme: "endpoint"
	I0811 00:31:50.563292       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0811 00:31:50.615736       1 client.go:360] parsed scheme: "endpoint"
	I0811 00:31:50.615771       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0811 00:31:50.944487       1 client.go:360] parsed scheme: "endpoint"
	I0811 00:31:50.944529       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	I0811 00:32:10.570358       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:32:10.570402       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:32:10.570411       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:32:54.476101       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:32:54.476146       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:32:54.476155       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:33:30.063609       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:33:30.063672       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:33:30.063682       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:34:09.055310       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:34:09.055354       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:34:09.055362       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:34:47.087863       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:34:47.087969       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:34:47.087995       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:35:20.426773       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:35:20.426821       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:35:20.426847       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [178036f64854] <==
	* I0811 00:31:31.010688       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
	I0811 00:31:31.125642       1 event.go:291] "Event occurred" object="kube-system/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
	I0811 00:31:31.267458       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
	I0811 00:31:31.363396       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
	I0811 00:31:31.473562       1 event.go:291] "Event occurred" object="olm/olm-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set olm-operator-859c88c96 to 1"
	I0811 00:31:31.559602       1 event.go:291] "Event occurred" object="olm/olm-operator-859c88c96" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: olm-operator-859c88c96-zfpv9"
	I0811 00:31:31.620453       1 event.go:291] "Event occurred" object="olm/catalog-operator" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set catalog-operator-75d496484d to 1"
	I0811 00:31:31.661627       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
	I0811 00:31:31.669704       1 event.go:291] "Event occurred" object="olm/catalog-operator-75d496484d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: catalog-operator-75d496484d-lftth"
	I0811 00:31:31.898491       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create-grgf6"
	I0811 00:31:32.091158       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch-xnff6"
	I0811 00:31:47.393791       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0811 00:31:49.635088       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0811 00:31:50.482378       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
	I0811 00:31:50.482416       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
	I0811 00:31:50.482448       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0811 00:31:50.482467       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0811 00:31:50.482490       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0811 00:31:50.482519       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
	I0811 00:31:50.482571       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0811 00:31:50.683331       1 shared_informer.go:247] Caches are synced for resource quota 
	I0811 00:31:50.915593       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0811 00:31:51.015957       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0811 00:32:02.036947       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0811 00:32:03.086565       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	
	* 
	* ==> kube-proxy [3e17f7de9e8a] <==
	* I0811 00:31:21.677244       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0811 00:31:21.677330       1 server_others.go:140] Detected node IP 192.168.49.2
	W0811 00:31:21.677372       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0811 00:31:21.789613       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0811 00:31:21.789651       1 server_others.go:212] Using iptables Proxier.
	I0811 00:31:21.789661       1 server_others.go:219] creating dualStackProxier for iptables.
	W0811 00:31:21.789673       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0811 00:31:21.789947       1 server.go:643] Version: v1.21.3
	I0811 00:31:21.855080       1 config.go:315] Starting service config controller
	I0811 00:31:21.855098       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0811 00:31:21.855222       1 config.go:224] Starting endpoint slice config controller
	I0811 00:31:21.855227       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0811 00:31:21.868516       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0811 00:31:21.870560       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0811 00:31:21.955804       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0811 00:31:21.955864       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4f7326edc3cf] <==
	* W0811 00:31:03.888717       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0811 00:31:03.888737       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0811 00:31:04.007158       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0811 00:31:04.010685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0811 00:31:04.010727       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 00:31:04.022524       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0811 00:31:04.023548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.024389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.024468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 00:31:04.024200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 00:31:04.024270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:31:04.024328       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 00:31:04.024586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 00:31:04.024664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 00:31:04.024722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 00:31:04.024777       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 00:31:04.024827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 00:31:04.024886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.025054       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 00:31:04.025179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.873178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:31:04.947036       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 00:31:04.986978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:05.021088       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0811 00:31:07.122862       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-08-11 00:30:27 UTC, end at Wed 2021-08-11 00:35:34 UTC. --
	Aug 11 00:35:11 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:11.700328    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:35:11 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:11.942445    2321 scope.go:111] "RemoveContainer" containerID="93d05e6a4fdeadc429b3b8680409bcc95a4911306bd7c468e2c22e8baba6c554"
	Aug 11 00:35:11 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:11.942873    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:35:12 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:12.569632    2321 scope.go:111] "RemoveContainer" containerID="b14e7fada5642b72a77355be3587194eca52b65c2e16b1a9d03d4a29ec8ff73c"
	Aug 11 00:35:12 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:12.569931    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-xfrxz_kube-system(19d31762-bc36-413f-8533-e97b57d38a28)\"" pod="kube-system/registry-proxy-xfrxz" podUID=19d31762-bc36-413f-8533-e97b57d38a28
	Aug 11 00:35:14 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:14.569913    2321 scope.go:111] "RemoveContainer" containerID="38c2a91f09c071cbeb4b3668c2ebd6a57f5b9c1dfdc3b138a383795cc803f634"
	Aug 11 00:35:14 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:14.998077    2321 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for olm/olm-operator-859c88c96-zfpv9 through plugin: invalid network status for"
	Aug 11 00:35:15 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:15.003640    2321 scope.go:111] "RemoveContainer" containerID="38c2a91f09c071cbeb4b3668c2ebd6a57f5b9c1dfdc3b138a383795cc803f634"
	Aug 11 00:35:15 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:15.004080    2321 scope.go:111] "RemoveContainer" containerID="92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81"
	Aug 11 00:35:15 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:15.005120    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:35:16 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:16.018582    2321 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for olm/olm-operator-859c88c96-zfpv9 through plugin: invalid network status for"
	Aug 11 00:35:21 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:21.581971    2321 scope.go:111] "RemoveContainer" containerID="92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81"
	Aug 11 00:35:21 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:21.582423    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:35:22 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:22.098463    2321 scope.go:111] "RemoveContainer" containerID="92a9c313b50af3db83df40b9e3de0bb7efa59a28ae8f1cc94597b359047e2d81"
	Aug 11 00:35:22 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:22.098857    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:35:23 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:23.569760    2321 scope.go:111] "RemoveContainer" containerID="93d05e6a4fdeadc429b3b8680409bcc95a4911306bd7c468e2c22e8baba6c554"
	Aug 11 00:35:23 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:23.570191    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:35:24 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:24.569489    2321 scope.go:111] "RemoveContainer" containerID="b14e7fada5642b72a77355be3587194eca52b65c2e16b1a9d03d4a29ec8ff73c"
	Aug 11 00:35:24 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:24.569925    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=registry-proxy pod=registry-proxy-xfrxz_kube-system(19d31762-bc36-413f-8533-e97b57d38a28)\"" pod="kube-system/registry-proxy-xfrxz" podUID=19d31762-bc36-413f-8533-e97b57d38a28
	Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:33.249830    2321 scope.go:111] "RemoveContainer" containerID="716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
	Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:33.278212    2321 scope.go:111] "RemoveContainer" containerID="716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
	Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:33.279028    2321 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0" containerID="716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
	Aug 11 00:35:33 addons-20210811003021-1387367 kubelet[2321]: I0811 00:35:33.279076    2321 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0} err="failed to get container status \"716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0\": rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0"
	Aug 11 00:35:34 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:34.574619    2321 kuberuntime_container.go:691] "Kill container failed" err="rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0" pod="kube-system/registry-dzdlw" podUID=4a872b2d-a2b1-46f9-9afd-c52b6647383f containerName="registry" containerID={Type:docker ID:716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0}
	Aug 11 00:35:34 addons-20210811003021-1387367 kubelet[2321]: E0811 00:35:34.578429    2321 kubelet_pods.go:1288] "Failed killing the pod" err="failed to \"KillContainer\" for \"registry\" with KillContainerError: \"rpc error: code = Unknown desc = Error: No such container: 716be14bfe61f9911006241516a4ba5a00a7f3a116d4cfb5970e74c919b456a0\"" podName="registry-dzdlw"
	
	* 
	* ==> storage-provisioner [0d6ae04912a6] <==
	* I0811 00:31:24.716594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0811 00:31:24.745311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0811 00:31:24.745363       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0811 00:31:24.768040       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0811 00:31:24.768216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6!
	I0811 00:31:24.771936       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70e765f0-f18d-4a79-9f04-05826884f687", APIVersion:"v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6 became leader
	I0811 00:31:24.968818       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210811003021-1387367 -n addons-20210811003021-1387367
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210811003021-1387367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context addons-20210811003021-1387367 describe pod gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210811003021-1387367 describe pod gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6: exit status 1 (88.553055ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-429nc" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-7grzk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-grgf6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xnff6" not found

                                                
                                                
** /stderr **
helpers_test.go:275: kubectl --context addons-20210811003021-1387367 describe pod gcp-auth-certs-create-429nc gcp-auth-certs-patch-7grzk ingress-nginx-admission-create-grgf6 ingress-nginx-admission-patch-xnff6: exit status 1
--- FAIL: TestAddons/parallel/Registry (174.52s)

                                                
                                    
x
+
TestAddons/parallel/Olm (733.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 33.989973ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 37.094289ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:469: failed waiting for packageserver deployment to stabilize: timed out waiting for the condition
addons_test.go:471: packageserver stabilized in 6m0.0390936s
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:340: "catalog-operator-75d496484d-lftth" [d8dfd948-ff17-4767-8371-4be73646cb5d] Running / Ready:ContainersNotReady (containers with unready status: [catalog-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [catalog-operator])
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.007772688s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:340: "olm-operator-859c88c96-zfpv9" [2866bd0c-37ae-465c-915f-d324574f23f7] Running / Ready:ContainersNotReady (containers with unready status: [olm-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [olm-operator])
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.006502527s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
addons_test.go:479: ***** TestAddons/parallel/Olm: pod "app=packageserver" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:479: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210811003021-1387367 -n addons-20210811003021-1387367
addons_test.go:479: TestAddons/parallel/Olm: showing logs for failed pods as of 2021-08-11 00:44:51.517358724 +0000 UTC m=+916.305594072
addons_test.go:480: failed waiting for pod packageserver: app=packageserver within 6m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Olm]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210811003021-1387367
helpers_test.go:236: (dbg) docker inspect addons-20210811003021-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120",
	        "Created": "2021-08-11T00:30:25.788956339Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1388276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-11T00:30:26.269675899Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/hosts",
	        "LogPath": "/var/lib/docker/containers/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120/5aa46682b7745ea0fa910d163361711d9eb6c9cfab6072314bf6857be01b2120-json.log",
	        "Name": "/addons-20210811003021-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210811003021-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210811003021-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7bbc23e4363abd5f8bd0174d71334b755b7a86aae7b28260349c3000af1c495/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-20210811003021-1387367",
	                "Source": "/var/lib/docker/volumes/addons-20210811003021-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210811003021-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210811003021-1387367",
	                "name.minikube.sigs.k8s.io": "addons-20210811003021-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17fbfc5588d01a72b28ff1d6c58d2e4bb8f2d21449a18677b10dd71b3b83ded4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50250"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50249"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50246"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50248"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50247"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/17fbfc5588d0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210811003021-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5aa46682b774",
	                        "addons-20210811003021-1387367"
	                    ],
	                    "NetworkID": "6dba5b957173120a4aafdf3873eab586b4a4a9b5791668afbe348cef17103048",
	                    "EndpointID": "d99047e7dbe6428356a66d026486a85ca7cdfff3ea6f120c69d9470809fd105b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
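For reference, the 5000/tcp binding captured in the inspect output above can be read back with the same Go-template idiom these logs use for 22/tcp; a sketch against this run's profile name (the mapped host port shown above is 50248):

docker container inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-20210811003021-1387367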
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210811003021-1387367 -n addons-20210811003021-1387367
helpers_test.go:245: <<< TestAddons/parallel/Olm FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Olm]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-20210811003021-1387367 logs -n 25: (1.499769292s)
helpers_test.go:253: TestAddons/parallel/Olm logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210811002935-1387367   | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:07 UTC |
	| delete  | -p                                     | download-only-20210811002935-1387367   | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:07 UTC |
	|         | download-only-20210811002935-1387367   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210811002935-1387367   | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:07 UTC | Wed, 11 Aug 2021 00:30:08 UTC |
	|         | download-only-20210811002935-1387367   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210811003008-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:21 UTC | Wed, 11 Aug 2021 00:30:21 UTC |
	|         | download-docker-20210811003008-1387367 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:30:21 UTC | Wed, 11 Aug 2021 00:32:41 UTC |
	|         | addons-20210811003021-1387367          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=docker             |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:32:54 UTC | Wed, 11 Aug 2021 00:32:54 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:35:32 UTC | Wed, 11 Aug 2021 00:35:32 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:35:33 UTC | Wed, 11 Aug 2021 00:35:34 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:35:45 UTC | Wed, 11 Aug 2021 00:35:51 UTC |
	|         | addons disable gcp-auth                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:36:23 UTC | Wed, 11 Aug 2021 00:36:30 UTC |
	|         | addons disable                         |                                        |         |         |                               |                               |
	|         | csi-hostpath-driver                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:36:30 UTC | Wed, 11 Aug 2021 00:36:31 UTC |
	|         | addons disable volumesnapshots         |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:36:36 UTC | Wed, 11 Aug 2021 00:36:37 UTC |
	|         | addons disable metrics-server          |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:36:46 UTC | Wed, 11 Aug 2021 00:36:46 UTC |
	|         | ssh curl -s http://127.0.0.1/          |                                        |         |         |                               |                               |
	|         | -H 'Host: nginx.example.com'           |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:36:47 UTC | Wed, 11 Aug 2021 00:36:47 UTC |
	|         | ssh curl -s http://127.0.0.1/          |                                        |         |         |                               |                               |
	|         | -H 'Host: nginx.example.com'           |                                        |         |         |                               |                               |
	| -p      | addons-20210811003021-1387367          | addons-20210811003021-1387367          | jenkins | v1.22.0 | Wed, 11 Aug 2021 00:36:47 UTC | Wed, 11 Aug 2021 00:37:15 UTC |
	|         | addons disable ingress                 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 00:30:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 00:30:21.602659 1387850 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:30:21.602845 1387850 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:30:21.602855 1387850 out.go:311] Setting ErrFile to fd 2...
	I0811 00:30:21.602859 1387850 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:30:21.603002 1387850 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 00:30:21.603313 1387850 out.go:305] Setting JSON to false
	I0811 00:30:21.604120 1387850 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36768,"bootTime":1628605053,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:30:21.604207 1387850 start.go:121] virtualization:  
	I0811 00:30:21.607468 1387850 out.go:177] * [addons-20210811003021-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 00:30:21.611463 1387850 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 00:30:21.609464 1387850 notify.go:169] Checking for updates...
	I0811 00:30:21.615278 1387850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:30:21.618400 1387850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 00:30:21.621705 1387850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 00:30:21.621941 1387850 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 00:30:21.658581 1387850 docker.go:132] docker version: linux-20.10.8
	I0811 00:30:21.658691 1387850 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:30:21.762135 1387850 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:30:21.69832939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:30:21.762295 1387850 docker.go:244] overlay module found
	I0811 00:30:21.764908 1387850 out.go:177] * Using the docker driver based on user configuration
	I0811 00:30:21.764929 1387850 start.go:278] selected driver: docker
	I0811 00:30:21.764934 1387850 start.go:751] validating driver "docker" against <nil>
	I0811 00:30:21.764951 1387850 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 00:30:21.765000 1387850 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 00:30:21.765023 1387850 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 00:30:21.767459 1387850 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 00:30:21.767848 1387850 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:30:21.854139 1387850 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:30:21.794641916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:30:21.854262 1387850 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 00:30:21.854419 1387850 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 00:30:21.854442 1387850 cni.go:93] Creating CNI manager for ""
	I0811 00:30:21.854449 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:30:21.854458 1387850 start_flags.go:277] config:
	{Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:30:21.856879 1387850 out.go:177] * Starting control plane node addons-20210811003021-1387367 in cluster addons-20210811003021-1387367
	I0811 00:30:21.856928 1387850 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:30:21.858843 1387850 out.go:177] * Pulling base image ...
	I0811 00:30:21.858881 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:30:21.858920 1387850 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 00:30:21.858936 1387850 cache.go:56] Caching tarball of preloaded images
	I0811 00:30:21.859099 1387850 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 00:30:21.859124 1387850 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 00:30:21.859416 1387850 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json ...
	I0811 00:30:21.859452 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json: {Name:mkad62a8ef7b1cb9eac286f0a4233efc658a409a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:21.859624 1387850 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:30:21.914689 1387850 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:30:21.914718 1387850 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:30:21.914731 1387850 cache.go:205] Successfully downloaded all kic artifacts
	I0811 00:30:21.914776 1387850 start.go:313] acquiring machines lock for addons-20210811003021-1387367: {Name:mk226548caa021fe6ed2b9069936448c3d09f345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 00:30:21.914932 1387850 start.go:317] acquired machines lock for "addons-20210811003021-1387367" in 132.463µs
	I0811 00:30:21.914971 1387850 start.go:89] Provisioning new machine with config: &{Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:30:21.915061 1387850 start.go:126] createHost starting for "" (driver="docker")
	I0811 00:30:21.917526 1387850 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0811 00:30:21.917773 1387850 start.go:160] libmachine.API.Create for "addons-20210811003021-1387367" (driver="docker")
	I0811 00:30:21.917815 1387850 client.go:168] LocalClient.Create starting
	I0811 00:30:21.917923 1387850 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 00:30:22.339798 1387850 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 00:30:22.974163 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 00:30:23.003309 1387850 cli_runner.go:162] docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 00:30:23.003391 1387850 network_create.go:255] running [docker network inspect addons-20210811003021-1387367] to gather additional debugging logs...
	I0811 00:30:23.003413 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367
	W0811 00:30:23.032304 1387850 cli_runner.go:162] docker network inspect addons-20210811003021-1387367 returned with exit code 1
	I0811 00:30:23.032336 1387850 network_create.go:258] error running [docker network inspect addons-20210811003021-1387367]: docker network inspect addons-20210811003021-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210811003021-1387367
	I0811 00:30:23.032348 1387850 network_create.go:260] output of [docker network inspect addons-20210811003021-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210811003021-1387367
	
	** /stderr **
	I0811 00:30:23.032405 1387850 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:30:23.062238 1387850 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40000d7398] misses:0}
	I0811 00:30:23.062294 1387850 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 00:30:23.062314 1387850 network_create.go:106] attempt to create docker network addons-20210811003021-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 00:30:23.062373 1387850 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210811003021-1387367
	I0811 00:30:23.131311 1387850 network_create.go:90] docker network addons-20210811003021-1387367 192.168.49.0/24 created
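A quick way to confirm the subnet and gateway that network_create reports here would be a template query of this form (a sketch reusing this run's network name; the IPAM fields correspond to the 192.168.49.0/24 / 192.168.49.1 values above):

docker network inspect addons-20210811003021-1387367 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'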
	I0811 00:30:23.131341 1387850 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210811003021-1387367" container
	I0811 00:30:23.131409 1387850 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 00:30:23.160364 1387850 cli_runner.go:115] Run: docker volume create addons-20210811003021-1387367 --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 00:30:23.190804 1387850 oci.go:102] Successfully created a docker volume addons-20210811003021-1387367
	I0811 00:30:23.190897 1387850 cli_runner.go:115] Run: docker run --rm --name addons-20210811003021-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --entrypoint /usr/bin/test -v addons-20210811003021-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 00:30:25.611528 1387850 cli_runner.go:168] Completed: docker run --rm --name addons-20210811003021-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --entrypoint /usr/bin/test -v addons-20210811003021-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (2.420589766s)
	I0811 00:30:25.611562 1387850 oci.go:106] Successfully prepared a docker volume addons-20210811003021-1387367
	W0811 00:30:25.611598 1387850 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 00:30:25.611608 1387850 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 00:30:25.611675 1387850 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 00:30:25.611691 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:30:25.611714 1387850 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 00:30:25.611770 1387850 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210811003021-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 00:30:25.746101 1387850 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210811003021-1387367 --name addons-20210811003021-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210811003021-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210811003021-1387367 --network addons-20210811003021-1387367 --ip 192.168.49.2 --volume addons-20210811003021-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 00:30:26.279482 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Running}}
	I0811 00:30:26.347407 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:30:26.400431 1387850 cli_runner.go:115] Run: docker exec addons-20210811003021-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 00:30:26.499917 1387850 oci.go:278] the created container "addons-20210811003021-1387367" has a running status.
	I0811 00:30:26.499948 1387850 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa...
	I0811 00:30:26.732383 1387850 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 00:30:26.881674 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:30:26.918020 1387850 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 00:30:26.918042 1387850 kic_runner.go:115] Args: [docker exec --privileged addons-20210811003021-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 00:30:35.641601 1387850 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210811003021-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.02979324s)
	I0811 00:30:35.641632 1387850 kic.go:188] duration metric: took 10.029915 seconds to extract preloaded images to volume
	I0811 00:30:35.641709 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:30:35.681545 1387850 machine.go:88] provisioning docker machine ...
	I0811 00:30:35.681590 1387850 ubuntu.go:169] provisioning hostname "addons-20210811003021-1387367"
	I0811 00:30:35.681654 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:35.724584 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:35.724791 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:35.724811 1387850 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210811003021-1387367 && echo "addons-20210811003021-1387367" | sudo tee /etc/hostname
	I0811 00:30:35.855478 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210811003021-1387367
	
	I0811 00:30:35.855550 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:35.892128 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:35.892309 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:35.892335 1387850 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210811003021-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210811003021-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210811003021-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 00:30:36.016702 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 00:30:36.016728 1387850 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 00:30:36.016752 1387850 ubuntu.go:177] setting up certificates
	I0811 00:30:36.016760 1387850 provision.go:83] configureAuth start
	I0811 00:30:36.016819 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
	I0811 00:30:36.046617 1387850 provision.go:137] copyHostCerts
	I0811 00:30:36.046706 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 00:30:36.046821 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 00:30:36.046895 1387850 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 00:30:36.046947 1387850 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.addons-20210811003021-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210811003021-1387367]
	I0811 00:30:36.901481 1387850 provision.go:171] copyRemoteCerts
	I0811 00:30:36.901548 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 00:30:36.901597 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:36.932010 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:37.015797 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 00:30:37.032008 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0811 00:30:37.048411 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 00:30:37.064819 1387850 provision.go:86] duration metric: configureAuth took 1.048044188s
	I0811 00:30:37.064842 1387850 ubuntu.go:193] setting minikube options for container-runtime
	I0811 00:30:37.065077 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:37.094964 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:37.095136 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:37.095153 1387850 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 00:30:37.212966 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 00:30:37.212986 1387850 ubuntu.go:71] root file system type: overlay
	I0811 00:30:37.213159 1387850 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 00:30:37.213224 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:37.243079 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:37.243251 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:37.243366 1387850 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 00:30:37.365398 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 00:30:37.365479 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:37.396410 1387850 main.go:130] libmachine: Using SSH client type: native
	I0811 00:30:37.396581 1387850 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50250 <nil> <nil>}
	I0811 00:30:37.396607 1387850 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 00:30:38.259628 1387850 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 00:30:37.360623318 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
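The empty ExecStart= line in the diff above is the standard systemd mechanism for replacing, rather than appending to, an inherited ExecStart. A minimal stand-alone sketch of the same clear-then-set pattern, using a hypothetical drop-in override and example dockerd flags instead of rewriting /lib/systemd/system/docker.service wholesale as the provisioner does here:

# hypothetical drop-in path and flags; not what this run actually wrote
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# reload and restart, as the diff-or-replace command above does
sudo systemctl daemon-reload && sudo systemctl restart docker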
	
	I0811 00:30:38.259655 1387850 machine.go:91] provisioned docker machine in 2.578088023s
	I0811 00:30:38.259665 1387850 client.go:171] LocalClient.Create took 16.341840918s
	I0811 00:30:38.259674 1387850 start.go:168] duration metric: libmachine.API.Create for "addons-20210811003021-1387367" took 16.341902554s
	I0811 00:30:38.259682 1387850 start.go:267] post-start starting for "addons-20210811003021-1387367" (driver="docker")
	I0811 00:30:38.259696 1387850 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 00:30:38.259758 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 00:30:38.259813 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.298448 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.384125 1387850 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 00:30:38.386661 1387850 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 00:30:38.386687 1387850 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 00:30:38.386698 1387850 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 00:30:38.386705 1387850 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 00:30:38.386715 1387850 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 00:30:38.386779 1387850 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 00:30:38.386806 1387850 start.go:270] post-start completed in 127.109195ms
	I0811 00:30:38.387133 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
	I0811 00:30:38.416894 1387850 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/config.json ...
	I0811 00:30:38.417167 1387850 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 00:30:38.417220 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.446953 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.529083 1387850 start.go:129] duration metric: createHost completed in 16.614007292s
	I0811 00:30:38.529119 1387850 start.go:80] releasing machines lock for "addons-20210811003021-1387367", held for 16.614173157s
	I0811 00:30:38.529201 1387850 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210811003021-1387367
	I0811 00:30:38.558592 1387850 ssh_runner.go:149] Run: systemctl --version
	I0811 00:30:38.558641 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.558656 1387850 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 00:30:38.558720 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:30:38.594358 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.601093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:30:38.830574 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 00:30:38.840501 1387850 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:30:38.851219 1387850 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 00:30:38.851291 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 00:30:38.861277 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 00:30:38.874263 1387850 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 00:30:38.958499 1387850 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 00:30:39.047217 1387850 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:30:39.056705 1387850 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 00:30:39.146104 1387850 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 00:30:39.155707 1387850 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:30:39.205950 1387850 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:30:39.260548 1387850 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 00:30:39.260677 1387850 cli_runner.go:115] Run: docker network inspect addons-20210811003021-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:30:39.290146 1387850 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 00:30:39.293407 1387850 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:30:39.302229 1387850 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:30:39.302303 1387850 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:30:39.341446 1387850 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:30:39.341473 1387850 docker.go:466] Images already preloaded, skipping extraction
	I0811 00:30:39.341528 1387850 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:30:39.380996 1387850 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:30:39.381035 1387850 cache_images.go:74] Images are preloaded, skipping loading
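The two listings above have to match the preload manifest for Kubernetes v1.21.3; when they do, minikube skips extracting the preloaded image tarball. The following is a minimal sketch of that kind of check, not minikube's own code: it runs the same "docker images --format {{.Repository}}:{{.Tag}}" listing and compares it against a few names copied from the output above, assuming docker is on PATH of the host being inspected.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command as in the log above.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[img] = true
	}
	// A few of the images expected for v1.21.3, copied from the stdout block above.
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.21.3",
		"k8s.gcr.io/etcd:3.4.13-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
}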
	I0811 00:30:39.381093 1387850 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 00:30:39.515442 1387850 cni.go:93] Creating CNI manager for ""
	I0811 00:30:39.515466 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:30:39.515474 1387850 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 00:30:39.515487 1387850 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210811003021-1387367 NodeName:addons-20210811003021-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 00:30:39.515632 1387850 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20210811003021-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
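The config printed above is written out as one multi-document YAML file (it is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below). As a minimal sketch, not part of the test output, the following reads that file and lists the kind of each document, assuming gopkg.in/yaml.v2 is available and that the file has already been promoted to /var/tmp/minikube/kubeadm.yaml.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v2"
)

func main() {
	// Path copied from the log; the file holds several YAML documents separated by "---".
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			log.Fatal(err)
		}
		// InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("kind=%v apiVersion=%v\n", m["kind"], m["apiVersion"])
	}
}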
	
	I0811 00:30:39.515719 1387850 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20210811003021-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
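The kubelet unit override above is rendered from the cluster config and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the next step. A rough sketch of rendering such a drop-in with text/template follows; the template text and values are taken from the ExecStart line above (with the flag set trimmed for brevity), but the helper itself is hypothetical rather than minikube's generator.

package main

import (
	"log"
	"os"
	"text/template"
)

// dropIn mirrors the [Service] override shown above.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.Kubelet}} --container-runtime=docker --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
`

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(dropIn))
	err := t.Execute(os.Stdout, map[string]string{
		"Kubelet": "/var/lib/minikube/binaries/v1.21.3/kubelet",
		"Node":    "addons-20210811003021-1387367",
		"IP":      "192.168.49.2",
	})
	if err != nil {
		log.Fatal(err)
	}
}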
	I0811 00:30:39.515790 1387850 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 00:30:39.524221 1387850 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 00:30:39.524290 1387850 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 00:30:39.530941 1387850 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0811 00:30:39.543732 1387850 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 00:30:39.556462 1387850 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0811 00:30:39.568807 1387850 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 00:30:39.572672 1387850 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:30:39.581434 1387850 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367 for IP: 192.168.49.2
	I0811 00:30:39.581481 1387850 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 00:30:40.153609 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt ...
	I0811 00:30:40.153643 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt: {Name:mk59a57628b7830e6da9d2ae7e8c01cd5efde140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.153894 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key ...
	I0811 00:30:40.153911 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key: {Name:mk96e056b1cd3dc0b43035730f08908c26c31fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.154044 1387850 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 00:30:40.471227 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt ...
	I0811 00:30:40.471263 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt: {Name:mkfd778913fc3b0da592cfc8a7d08059e895c701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.471472 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key ...
	I0811 00:30:40.471492 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key: {Name:mk0ce74341fb606236ed0d73a79e2c5cede7537d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.471637 1387850 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key
	I0811 00:30:40.471650 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt with IP's: []
	I0811 00:30:40.932035 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt ...
	I0811 00:30:40.932074 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: {Name:mk9fa1e098b232414d6313e801fa75c86c1d49bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.932328 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key ...
	I0811 00:30:40.932348 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.key: {Name:mkfe24cba1294c2a137e1fca2c7855f1633fb7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:40.932465 1387850 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2
	I0811 00:30:40.932477 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 00:30:41.378481 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 ...
	I0811 00:30:41.378518 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2: {Name:mk61de60fd373ccc807bd5cda384447d381e8be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:41.378737 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2 ...
	I0811 00:30:41.378752 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2: {Name:mk28ad1051189a18b59148562d5150391e295b32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:41.378851 1387850 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt
	I0811 00:30:41.378911 1387850 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key
	I0811 00:30:41.378968 1387850 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key
	I0811 00:30:41.378981 1387850 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt with IP's: []
	I0811 00:30:42.573038 1387850 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt ...
	I0811 00:30:42.573080 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt: {Name:mk0190b4814f268c32de2db03fd82b7d16622974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:42.573306 1387850 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key ...
	I0811 00:30:42.573323 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key: {Name:mkb9c7131f1d68ca2e257df72147ba667f820217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:30:42.573512 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 00:30:42.573555 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 00:30:42.573587 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 00:30:42.573617 1387850 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 00:30:42.574683 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 00:30:42.592943 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 00:30:42.609946 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 00:30:42.626691 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 00:30:42.643759 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 00:30:42.660446 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 00:30:42.677226 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 00:30:42.693943 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 00:30:42.711059 1387850 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 00:30:42.727916 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 00:30:42.740400 1387850 ssh_runner.go:149] Run: openssl version
	I0811 00:30:42.746610 1387850 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 00:30:42.755297 1387850 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:30:42.758347 1387850 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:30:42.758400 1387850 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:30:42.763252 1387850 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
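The certs.go/crypto.go steps above generate a self-signed minikubeCA, the client, apiserver and aggregator certificates, copy them into /var/lib/minikube/certs, and link the CA into /etc/ssl/certs. A minimal sketch of just the self-signed-CA part using crypto/x509 follows; the output file names and the ten-year validity are placeholder choices, not minikube's exact values.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Self-signed CA template; "minikubeCA" matches the APIServerName seen in the config.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // placeholder validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	// Write PEM-encoded cert and key, roughly what ends up as ca.crt / ca.key.
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.crt", certPEM, 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("ca.key", keyPEM, 0600); err != nil {
		log.Fatal(err)
	}
}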
	I0811 00:30:42.770353 1387850 kubeadm.go:390] StartCluster: {Name:addons-20210811003021-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210811003021-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:30:42.770495 1387850 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 00:30:42.809002 1387850 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 00:30:42.816207 1387850 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 00:30:42.822961 1387850 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 00:30:42.823066 1387850 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 00:30:42.830328 1387850 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 00:30:42.830370 1387850 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 00:30:43.619917 1387850 out.go:204]   - Generating certificates and keys ...
	I0811 00:30:49.880691 1387850 out.go:204]   - Booting up control plane ...
	I0811 00:31:06.451215 1387850 out.go:204]   - Configuring RBAC rules ...
	I0811 00:31:06.874304 1387850 cni.go:93] Creating CNI manager for ""
	I0811 00:31:06.874325 1387850 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:31:06.874348 1387850 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 00:31:06.874455 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:06.874510 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=addons-20210811003021-1387367 minikube.k8s.io/updated_at=2021_08_11T00_31_06_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:07.395364 1387850 ops.go:34] apiserver oom_adj: -16
	I0811 00:31:07.395478 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:07.985651 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:08.485872 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:08.985765 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:09.485151 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:09.985899 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:10.485129 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:10.985624 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:11.485105 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:11.985253 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:12.485152 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:12.985351 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:13.485134 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:13.986075 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:14.485781 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:14.985900 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:15.485861 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:15.986014 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:16.485778 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:16.985653 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:17.485947 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:17.985276 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:18.485896 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:18.985977 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:19.485799 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:19.985256 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:20.485459 1387850 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:31:20.663342 1387850 kubeadm.go:985] duration metric: took 13.78893335s to wait for elevateKubeSystemPrivileges.
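The repeated "kubectl get sa default" runs above are a poll: kubeadm init has finished, but the default ServiceAccount only appears once the controller-manager has created it, so minikube retries roughly every 500ms until the command succeeds. The following is a sketch of that polling pattern, with the binary path, kubeconfig location, and two-minute deadline assumed for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry the same probe as the log until it exits 0 or the deadline passes.
	kubectl := "/var/lib/minikube/binaries/v1.21.3/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "--kubeconfig", "/var/lib/minikube/kubeconfig", "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}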
	I0811 00:31:20.663367 1387850 kubeadm.go:392] StartCluster complete in 37.893022782s
	I0811 00:31:20.663382 1387850 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:31:20.663521 1387850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:31:20.663950 1387850 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:31:21.189383 1387850 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210811003021-1387367" rescaled to 1
	I0811 00:31:21.189462 1387850 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:31:21.193170 1387850 out.go:177] * Verifying Kubernetes components...
	I0811 00:31:21.193243 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:31:21.189583 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 00:31:21.189906 1387850 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0811 00:31:21.193403 1387850 addons.go:59] Setting volumesnapshots=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.193416 1387850 addons.go:135] Setting addon volumesnapshots=true in "addons-20210811003021-1387367"
	I0811 00:31:21.193441 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.193953 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.194243 1387850 addons.go:59] Setting ingress=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.194260 1387850 addons.go:135] Setting addon ingress=true in "addons-20210811003021-1387367"
	I0811 00:31:21.194284 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.194705 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.194767 1387850 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.194790 1387850 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210811003021-1387367"
	I0811 00:31:21.194811 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.195183 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.195236 1387850 addons.go:59] Setting default-storageclass=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.195247 1387850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210811003021-1387367"
	I0811 00:31:21.195465 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.195519 1387850 addons.go:59] Setting gcp-auth=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.195539 1387850 mustload.go:65] Loading cluster: addons-20210811003021-1387367
	I0811 00:31:21.195857 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.195907 1387850 addons.go:59] Setting olm=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.195915 1387850 addons.go:135] Setting addon olm=true in "addons-20210811003021-1387367"
	I0811 00:31:21.195931 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.196301 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.196350 1387850 addons.go:59] Setting metrics-server=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.196358 1387850 addons.go:135] Setting addon metrics-server=true in "addons-20210811003021-1387367"
	I0811 00:31:21.196372 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.196738 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.196790 1387850 addons.go:59] Setting registry=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.196797 1387850 addons.go:135] Setting addon registry=true in "addons-20210811003021-1387367"
	I0811 00:31:21.196812 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.197403 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.197413 1387850 addons.go:59] Setting storage-provisioner=true in profile "addons-20210811003021-1387367"
	I0811 00:31:21.197526 1387850 addons.go:135] Setting addon storage-provisioner=true in "addons-20210811003021-1387367"
	W0811 00:31:21.197549 1387850 addons.go:147] addon storage-provisioner should already be in state true
	I0811 00:31:21.197579 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.198079 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.316737 1387850 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0811 00:31:21.318960 1387850 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0811 00:31:21.321029 1387850 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0811 00:31:21.321081 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0811 00:31:21.321090 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0811 00:31:21.321153 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.393126 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0811 00:31:21.393208 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0811 00:31:21.393219 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0811 00:31:21.393552 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.566994 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0811 00:31:21.570897 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0811 00:31:21.573431 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0811 00:31:21.575941 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0811 00:31:21.577933 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0811 00:31:21.589807 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0811 00:31:21.590689 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.593774 1387850 addons.go:135] Setting addon default-storageclass=true in "addons-20210811003021-1387367"
	W0811 00:31:21.593807 1387850 addons.go:147] addon default-storageclass should already be in state true
	I0811 00:31:21.593832 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:21.594305 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:21.594464 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0811 00:31:21.602078 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0811 00:31:21.594805 1387850 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0811 00:31:21.613391 1387850 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0811 00:31:21.610752 1387850 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0811 00:31:21.634506 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0811 00:31:21.634522 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0811 00:31:21.634580 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.610760 1387850 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0811 00:31:21.635648 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0811 00:31:21.635657 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0811 00:31:21.635701 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.644033 1387850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:31:21.644148 1387850 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:31:21.644157 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 00:31:21.644215 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.659467 1387850 out.go:177]   - Using image registry:2.7.1
	I0811 00:31:21.663767 1387850 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0811 00:31:21.665207 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0811 00:31:21.665236 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0811 00:31:21.665321 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.715375 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.740338 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0811 00:31:21.747616 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.767963 1387850 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0811 00:31:21.767996 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0811 00:31:21.768070 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:21.821081 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.893062 1387850 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 00:31:21.894732 1387850 node_ready.go:35] waiting up to 6m0s for node "addons-20210811003021-1387367" to be "Ready" ...
	I0811 00:31:21.900813 1387850 node_ready.go:49] node "addons-20210811003021-1387367" has status "Ready":"True"
	I0811 00:31:21.900877 1387850 node_ready.go:38] duration metric: took 6.121847ms waiting for node "addons-20210811003021-1387367" to be "Ready" ...
	I0811 00:31:21.900891 1387850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:31:21.970648 1387850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace to be "Ready" ...
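Node and pod readiness above are checked directly against the API server. A sketch of an equivalent wait using client-go follows; the kubeconfig path is a placeholder (the test uses the profile's own kubeconfig), the six-minute deadline mirrors the wait above, and the label selector matches the coredns wait logged here.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Kubeconfig path is a placeholder, not the test's value.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
			fmt.Println("coredns pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for coredns")
}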
	I0811 00:31:21.971138 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.988186 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:21.998543 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.022093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.024598 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.025389 1387850 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 00:31:22.025403 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 00:31:22.025452 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:22.089084 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.145093 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.189558 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0811 00:31:22.189581 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0811 00:31:22.302897 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0811 00:31:22.302958 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0811 00:31:22.410765 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:31:22.422948 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0811 00:31:22.423015 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0811 00:31:22.426280 1387850 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0811 00:31:22.432260 1387850 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0811 00:31:22.432318 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0811 00:31:22.436077 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0811 00:31:22.436129 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0811 00:31:22.444025 1387850 addons.go:135] Setting addon gcp-auth=true in "addons-20210811003021-1387367"
	I0811 00:31:22.444083 1387850 host.go:66] Checking if "addons-20210811003021-1387367" exists ...
	I0811 00:31:22.444621 1387850 cli_runner.go:115] Run: docker container inspect addons-20210811003021-1387367 --format={{.State.Status}}
	I0811 00:31:22.494171 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0811 00:31:22.494196 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0811 00:31:22.507387 1387850 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0811 00:31:22.509963 1387850 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0811 00:31:22.510021 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0811 00:31:22.510031 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0811 00:31:22.510090 1387850 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210811003021-1387367
	I0811 00:31:22.537941 1387850 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0811 00:31:22.537962 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0811 00:31:22.559492 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0811 00:31:22.559512 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0811 00:31:22.566425 1387850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50250 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/addons-20210811003021-1387367/id_rsa Username:docker}
	I0811 00:31:22.567873 1387850 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0811 00:31:22.567891 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0811 00:31:22.624805 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0811 00:31:22.624829 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0811 00:31:22.720992 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0811 00:31:22.724130 1387850 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 00:31:22.724148 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0811 00:31:22.727176 1387850 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0811 00:31:22.727192 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0811 00:31:22.729946 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 00:31:22.764118 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0811 00:31:22.764137 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0811 00:31:22.774485 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0811 00:31:22.812352 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0811 00:31:22.812417 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0811 00:31:22.870611 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 00:31:22.917662 1387850 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0811 00:31:22.917720 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0811 00:31:22.943400 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0811 00:31:23.037187 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0811 00:31:23.037246 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0811 00:31:23.100781 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0811 00:31:23.100840 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0811 00:31:23.124682 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0811 00:31:23.217383 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0811 00:31:23.217443 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0811 00:31:23.268107 1387850 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0811 00:31:23.268163 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0811 00:31:23.349221 1387850 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0811 00:31:23.349241 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0811 00:31:23.433746 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0811 00:31:23.466200 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0811 00:31:23.466266 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0811 00:31:23.568633 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0811 00:31:23.568690 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0811 00:31:23.750358 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0811 00:31:23.750414 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0811 00:31:23.791988 1387850 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.898894924s)
	I0811 00:31:23.792052 1387850 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
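The sed pipeline completed just above rewrites the coredns ConfigMap so that pods inside the cluster can resolve host.minikube.internal to the host gateway. Based on the literal text that command inserts, the Corefile gains a hosts block immediately before the existing forward directive, roughly as below; the forward line is the stock CoreDNS directive already present in the ConfigMap and is shown only for context.

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf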
	I0811 00:31:23.955311 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0811 00:31:23.955374 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0811 00:31:23.956419 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.545593968s)
	I0811 00:31:24.082471 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:24.160978 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0811 00:31:24.161066 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0811 00:31:24.203405 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0811 00:31:24.203429 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0811 00:31:24.411828 1387850 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0811 00:31:24.411854 1387850 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0811 00:31:24.432074 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0811 00:31:26.152298 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:28.575273 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:31.053360 1387850 pod_ready.go:102] pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace has status "Ready":"False"
	I0811 00:31:31.587406 1387850 pod_ready.go:97] error getting pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5wk4c" not found
	I0811 00:31:31.587437 1387850 pod_ready.go:81] duration metric: took 9.616760181s waiting for pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace to be "Ready" ...
	E0811 00:31:31.587449 1387850 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-5wk4c" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5wk4c" not found
	I0811 00:31:31.587458 1387850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.759414 1387850 pod_ready.go:92] pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.759439 1387850 pod_ready.go:81] duration metric: took 171.972167ms waiting for pod "coredns-558bd4d5db-j4xjh" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.759450 1387850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.874331 1387850 pod_ready.go:92] pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.874356 1387850 pod_ready.go:81] duration metric: took 114.898034ms waiting for pod "etcd-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.874369 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.877164 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.147157564s)
	I0811 00:31:31.877240 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (9.156226911s)
	W0811 00:31:31.877276 1387850 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0811 00:31:31.877292 1387850 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
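The first pass over crds.yaml and olm.yaml fails because the CustomResourceDefinitions created by that same apply are not yet established when the OperatorGroup, ClusterServiceVersion and CatalogSource objects are submitted, so the addon code schedules a retry (retry.go:31 above) and the apply succeeds on a later attempt. The retry helper itself is not part of this log; the Go sketch below only illustrates that apply-then-retry pattern, with hypothetical helper names and a made-up backoff schedule rather than minikube's actual implementation.

    package addons

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // runKubectlApply shells out to `kubectl apply -f` for each manifest file.
    func runKubectlApply(files []string) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return exec.Command("kubectl", args...).Run()
    }

    // applyWithRetry re-runs the apply a few times with a growing delay, in the
    // spirit of the "apply failed, will retry after ..." lines in the log.
    func applyWithRetry(files []string) error {
        delay := 250 * time.Millisecond
        var lastErr error
        for attempt := 0; attempt < 5; attempt++ {
            if lastErr = runKubectlApply(files); lastErr == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("apply did not succeed after retries: %w", lastErr)
    }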
	I0811 00:31:31.877402 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (9.102897402s)
	I0811 00:31:31.877416 1387850 addons.go:313] Verifying addon ingress=true in "addons-20210811003021-1387367"
	I0811 00:31:31.887205 1387850 out.go:177] * Verifying ingress addon...
	I0811 00:31:31.889037 1387850 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0811 00:31:31.877748 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.007067811s)
	W0811 00:31:31.889240 1387850 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0811 00:31:31.889258 1387850 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0811 00:31:31.877785 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.934328602s)
	I0811 00:31:31.889280 1387850 addons.go:313] Verifying addon registry=true in "addons-20210811003021-1387367"
	I0811 00:31:31.877850 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.753144936s)
	I0811 00:31:31.877905 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (8.444139497s)
	I0811 00:31:31.878122 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.446019981s)
	I0811 00:31:31.893287 1387850 addons.go:313] Verifying addon metrics-server=true in "addons-20210811003021-1387367"
	I0811 00:31:31.893306 1387850 out.go:177] * Verifying registry addon...
	I0811 00:31:31.894964 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0811 00:31:31.895140 1387850 addons.go:313] Verifying addon gcp-auth=true in "addons-20210811003021-1387367"
	I0811 00:31:31.897634 1387850 out.go:177] * Verifying gcp-auth addon...
	I0811 00:31:31.899263 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0811 00:31:31.893256 1387850 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210811003021-1387367"
	I0811 00:31:31.902262 1387850 out.go:177] * Verifying csi-hostpath-driver addon...
	I0811 00:31:31.903868 1387850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0811 00:31:31.957269 1387850 pod_ready.go:92] pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.957291 1387850 pod_ready.go:81] duration metric: took 82.914978ms waiting for pod "kube-apiserver-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.957302 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.969760 1387850 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0811 00:31:31.969785 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:31.973452 1387850 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0811 00:31:31.973479 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:31.973901 1387850 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0811 00:31:31.973912 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:31.974638 1387850 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0811 00:31:31.974650 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
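The kapi.go:96 lines above and below are a poll loop: for each label selector minikube lists the matching pods and keeps waiting while any of them is still Pending. That wait code is not shown in this log, so the client-go sketch below is only an approximation of such a loop; the clientset wiring and the 10-second/6-minute poll settings are assumptions, not values read from the log.

    package addons

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsByLabel polls until every pod matching the selector in the
    // namespace has left the Pending phase and is Running.
    func waitForPodsByLabel(cs kubernetes.Interface, ns, selector string) error {
        return wait.PollImmediate(10*time.Second, 6*time.Minute, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            if len(pods.Items) == 0 {
                return false, nil // nothing scheduled for this selector yet
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil // at least one pod is still Pending
                }
            }
            return true, nil
        })
    }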
	I0811 00:31:31.983912 1387850 pod_ready.go:92] pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.983934 1387850 pod_ready.go:81] duration metric: took 26.622888ms waiting for pod "kube-controller-manager-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.983947 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbv8p" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.999660 1387850 pod_ready.go:92] pod "kube-proxy-hbv8p" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:31.999681 1387850 pod_ready.go:81] duration metric: took 15.72646ms waiting for pod "kube-proxy-hbv8p" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:31.999692 1387850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:32.134974 1387850 pod_ready.go:92] pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:31:32.134996 1387850 pod_ready.go:81] duration metric: took 135.293862ms waiting for pod "kube-scheduler-addons-20210811003021-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:31:32.135007 1387850 pod_ready.go:38] duration metric: took 10.234102984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:31:32.135022 1387850 api_server.go:50] waiting for apiserver process to appear ...
	I0811 00:31:32.135065 1387850 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 00:31:32.157221 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0811 00:31:32.250282 1387850 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 00:31:32.278457 1387850 api_server.go:70] duration metric: took 11.088961421s to wait for apiserver process to appear ...
	I0811 00:31:32.278478 1387850 api_server.go:86] waiting for apiserver healthz status ...
	I0811 00:31:32.278488 1387850 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 00:31:32.307214 1387850 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 00:31:32.308374 1387850 api_server.go:139] control plane version: v1.21.3
	I0811 00:31:32.308394 1387850 api_server.go:129] duration metric: took 29.908897ms to wait for apiserver health ...
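The healthz wait above is a plain HTTPS GET against the apiserver that is treated as healthy once /healthz answers 200 with the body "ok". A minimal way to reproduce that probe by hand is sketched below; skipping certificate verification is a shortcut for an ad-hoc check, whereas minikube itself trusts the cluster CA from the kubeconfig.

    package addons

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    // checkHealthz issues the same GET the log records against /healthz and
    // reports an error unless the apiserver answers 200.
    func checkHealthz(endpoint string) error {
        client := &http.Client{Transport: &http.Transport{
            // Ad-hoc probe only: accept the apiserver certificate unverified.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

For the cluster in this run, checkHealthz("https://192.168.49.2:8443") corresponds to the request logged at 00:31:32.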
	I0811 00:31:32.308401 1387850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 00:31:32.326005 1387850 system_pods.go:59] 17 kube-system pods found
	I0811 00:31:32.326040 1387850 system_pods.go:61] "coredns-558bd4d5db-j4xjh" [f948b5ac-414e-4239-ad46-497ef8f75853] Running
	I0811 00:31:32.326045 1387850 system_pods.go:61] "csi-hostpath-attacher-0" [2f0d8b28-ddaf-458b-b5b5-8b3c07c09415] Pending
	I0811 00:31:32.326050 1387850 system_pods.go:61] "csi-hostpath-provisioner-0" [1ec66ec1-bc43-458c-aec5-9987f687ac44] Pending
	I0811 00:31:32.326055 1387850 system_pods.go:61] "csi-hostpath-resizer-0" [79bc0e72-5889-4c3f-8670-8c2c53610472] Pending
	I0811 00:31:32.326060 1387850 system_pods.go:61] "csi-hostpath-snapshotter-0" [adee0893-0da6-42b1-b77a-115426aeb95d] Pending
	I0811 00:31:32.326065 1387850 system_pods.go:61] "csi-hostpathplugin-0" [6c1cecb2-45cd-41c0-b435-d9d52972488e] Pending
	I0811 00:31:32.326070 1387850 system_pods.go:61] "etcd-addons-20210811003021-1387367" [66a09e0e-6be7-443c-8a42-6f5c84c19094] Running
	I0811 00:31:32.326076 1387850 system_pods.go:61] "kube-apiserver-addons-20210811003021-1387367" [9691ed48-418f-4dad-8ac3-30d61a430bbf] Running
	I0811 00:31:32.326085 1387850 system_pods.go:61] "kube-controller-manager-addons-20210811003021-1387367" [3b729013-1dc6-4788-9f3c-f7aa402e59e1] Running
	I0811 00:31:32.326089 1387850 system_pods.go:61] "kube-proxy-hbv8p" [368541dc-ff39-4aee-af59-de331b32e889] Running
	I0811 00:31:32.326099 1387850 system_pods.go:61] "kube-scheduler-addons-20210811003021-1387367" [44340cdb-fad4-460c-994c-cf7586c7cb72] Running
	I0811 00:31:32.326106 1387850 system_pods.go:61] "metrics-server-77c99ccb96-7bz4t" [f135d883-ab80-4dd8-a141-333424152bcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0811 00:31:32.326114 1387850 system_pods.go:61] "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0811 00:31:32.326124 1387850 system_pods.go:61] "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0811 00:31:32.326132 1387850 system_pods.go:61] "snapshot-controller-989f9ddc8-f8q5j" [b992001c-a1c6-4425-b360-98696726a82a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.326144 1387850 system_pods.go:61] "snapshot-controller-989f9ddc8-pjvmj" [9502385f-ad82-4081-bc88-a44d574dad9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.326150 1387850 system_pods.go:61] "storage-provisioner" [f5eb4d07-1355-48c6-aa1c-17031e9d86b9] Running
	I0811 00:31:32.326160 1387850 system_pods.go:74] duration metric: took 17.753408ms to wait for pod list to return data ...
	I0811 00:31:32.326168 1387850 default_sa.go:34] waiting for default service account to be created ...
	I0811 00:31:32.473443 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:32.493205 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:32.493584 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:32.494370 1387850 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0811 00:31:32.494390 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:32.510135 1387850 default_sa.go:45] found service account: "default"
	I0811 00:31:32.510161 1387850 default_sa.go:55] duration metric: took 183.984313ms for default service account to be created ...
	I0811 00:31:32.510170 1387850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 00:31:32.777565 1387850 system_pods.go:86] 17 kube-system pods found
	I0811 00:31:32.777597 1387850 system_pods.go:89] "coredns-558bd4d5db-j4xjh" [f948b5ac-414e-4239-ad46-497ef8f75853] Running
	I0811 00:31:32.777605 1387850 system_pods.go:89] "csi-hostpath-attacher-0" [2f0d8b28-ddaf-458b-b5b5-8b3c07c09415] Pending
	I0811 00:31:32.777610 1387850 system_pods.go:89] "csi-hostpath-provisioner-0" [1ec66ec1-bc43-458c-aec5-9987f687ac44] Pending
	I0811 00:31:32.777615 1387850 system_pods.go:89] "csi-hostpath-resizer-0" [79bc0e72-5889-4c3f-8670-8c2c53610472] Pending
	I0811 00:31:32.777620 1387850 system_pods.go:89] "csi-hostpath-snapshotter-0" [adee0893-0da6-42b1-b77a-115426aeb95d] Pending
	I0811 00:31:32.777629 1387850 system_pods.go:89] "csi-hostpathplugin-0" [6c1cecb2-45cd-41c0-b435-d9d52972488e] Pending
	I0811 00:31:32.777634 1387850 system_pods.go:89] "etcd-addons-20210811003021-1387367" [66a09e0e-6be7-443c-8a42-6f5c84c19094] Running
	I0811 00:31:32.777645 1387850 system_pods.go:89] "kube-apiserver-addons-20210811003021-1387367" [9691ed48-418f-4dad-8ac3-30d61a430bbf] Running
	I0811 00:31:32.777652 1387850 system_pods.go:89] "kube-controller-manager-addons-20210811003021-1387367" [3b729013-1dc6-4788-9f3c-f7aa402e59e1] Running
	I0811 00:31:32.777661 1387850 system_pods.go:89] "kube-proxy-hbv8p" [368541dc-ff39-4aee-af59-de331b32e889] Running
	I0811 00:31:32.777666 1387850 system_pods.go:89] "kube-scheduler-addons-20210811003021-1387367" [44340cdb-fad4-460c-994c-cf7586c7cb72] Running
	I0811 00:31:32.777680 1387850 system_pods.go:89] "metrics-server-77c99ccb96-7bz4t" [f135d883-ab80-4dd8-a141-333424152bcb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0811 00:31:32.777693 1387850 system_pods.go:89] "registry-dzdlw" [4a872b2d-a2b1-46f9-9afd-c52b6647383f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0811 00:31:32.777708 1387850 system_pods.go:89] "registry-proxy-xfrxz" [19d31762-bc36-413f-8533-e97b57d38a28] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0811 00:31:32.777716 1387850 system_pods.go:89] "snapshot-controller-989f9ddc8-f8q5j" [b992001c-a1c6-4425-b360-98696726a82a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.777724 1387850 system_pods.go:89] "snapshot-controller-989f9ddc8-pjvmj" [9502385f-ad82-4081-bc88-a44d574dad9c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 00:31:32.777730 1387850 system_pods.go:89] "storage-provisioner" [f5eb4d07-1355-48c6-aa1c-17031e9d86b9] Running
	I0811 00:31:32.777737 1387850 system_pods.go:126] duration metric: took 267.562785ms to wait for k8s-apps to be running ...
	I0811 00:31:32.777744 1387850 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 00:31:32.777795 1387850 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:31:33.022664 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:33.043137 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:33.043695 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:33.076210 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:33.475631 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:33.487013 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:33.487474 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:33.488225 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:33.973986 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:33.978066 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:33.978643 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:33.982551 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:34.475328 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:34.484737 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:34.485574 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:34.491047 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:34.978754 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:34.986378 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:35.002282 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:35.003252 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:35.492406 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:35.498168 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:35.498801 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:35.507714 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:35.849572 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (3.692315583s)
	I0811 00:31:35.849751 1387850 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.599438997s)
	I0811 00:31:35.849801 1387850 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.07199149s)
	I0811 00:31:35.849823 1387850 system_svc.go:56] duration metric: took 3.072076781s WaitForService to wait for kubelet.
	I0811 00:31:35.849853 1387850 kubeadm.go:547] duration metric: took 14.66035318s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 00:31:35.849889 1387850 node_conditions.go:102] verifying NodePressure condition ...
	I0811 00:31:35.856082 1387850 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:31:35.856156 1387850 node_conditions.go:123] node cpu capacity is 2
	I0811 00:31:35.856183 1387850 node_conditions.go:105] duration metric: took 6.277447ms to run NodePressure ...
	I0811 00:31:35.856204 1387850 start.go:231] waiting for startup goroutines ...
	I0811 00:31:36.005256 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:36.013661 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:36.014789 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:36.015952 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:36.473926 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:36.483249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:36.491648 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:36.492571 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:36.974069 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:36.986531 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:37.006446 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:37.013357 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:37.489649 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:37.490803 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:37.491434 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:37.504790 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:37.975170 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:37.989580 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:37.989802 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:38.025303 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:38.474918 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:38.492730 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:38.496663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:38.497970 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:38.973085 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:38.978997 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:38.979735 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:38.982227 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:39.474799 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:39.481528 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:39.481921 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:39.484317 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:39.972804 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:39.978286 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:39.979517 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:39.981143 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:40.474715 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:40.481427 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:40.488665 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:40.494651 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:40.976036 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:40.980427 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:40.983056 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:40.987033 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:41.476447 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:41.479669 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:41.480016 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:41.483594 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:41.973617 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:41.978089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:41.982266 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:41.984663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:42.472736 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:42.487219 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:42.492985 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:42.493947 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:42.973539 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:42.982946 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:42.986923 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:42.987907 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:43.473857 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:43.479414 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:43.481641 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:43.483870 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:43.973517 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:43.982133 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:43.983164 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:43.983422 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:44.473837 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:44.479785 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:44.484353 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:44.488415 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:44.979994 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:44.985076 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:44.986656 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:44.992396 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:45.473966 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:45.481107 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:45.481941 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:45.487107 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:45.988284 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:46.002665 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 00:31:46.002780 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:46.007448 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:46.474794 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:46.482191 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:46.485676 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:46.487225 1387850 kapi.go:108] duration metric: took 14.592259009s to wait for kubernetes.io/minikube-addons=registry ...
	I0811 00:31:46.974885 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:46.981478 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:46.989618 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:47.487399 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:47.491281 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:47.492579 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:47.975251 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:47.985471 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:47.986668 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:48.490136 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:48.490788 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:48.506411 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:48.973392 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:48.977674 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:48.981056 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:49.474082 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:49.481091 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:49.485354 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:49.973955 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:49.996161 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:49.997622 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:50.475171 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:50.524381 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:50.525120 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:50.974322 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:50.982138 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:50.982860 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:51.474535 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:51.479680 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:51.480476 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:51.973249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:51.978600 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:51.981854 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:52.473226 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:52.477902 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:52.482802 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:52.973434 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:52.978777 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:52.980477 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:53.473319 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:53.477756 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:53.487033 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:53.973687 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:53.978698 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:53.981280 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:54.475089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:54.481790 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:54.482384 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:54.985792 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:54.988767 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:54.990896 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:55.473392 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:55.477987 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:55.481815 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:55.973541 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:55.977681 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:55.980675 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:56.474324 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:56.481953 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:56.484326 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:56.974064 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:56.982020 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:56.982410 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:57.473755 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:57.481929 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:57.482893 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:57.973431 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:57.982154 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:57.982719 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:58.473822 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:58.481555 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:58.482077 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:58.973492 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:58.979950 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:58.981978 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:59.473887 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:59.477935 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:59.481649 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:31:59.972952 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:31:59.982204 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:31:59.986360 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:00.473567 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:00.480713 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:00.483313 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:00.973469 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:00.977297 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:00.980898 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:01.495843 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:01.499200 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:01.504461 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:01.973397 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:01.977703 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:01.981521 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:02.473108 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:02.480024 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:02.480860 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:02.973624 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:02.978280 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:02.981042 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:03.473799 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:03.480741 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:03.481368 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:03.972770 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:03.983282 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:03.984249 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:04.491095 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:04.492784 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:04.492970 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:04.974758 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:04.984471 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:04.985330 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:05.472850 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:05.477703 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:05.482839 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:05.973192 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 00:32:05.978328 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:05.980518 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:06.473106 1387850 kapi.go:108] duration metric: took 34.573837309s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0811 00:32:06.475346 1387850 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210811003021-1387367 cluster.
	I0811 00:32:06.477505 1387850 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0811 00:32:06.479544 1387850 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0811 00:32:06.481981 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:06.487657 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:06.981307 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:06.981745 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:07.485847 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:07.488036 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:07.980464 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:07.982199 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:08.479056 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:08.487132 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:08.983438 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:08.989743 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:09.478278 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:09.482809 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:09.977227 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:09.981531 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:10.479714 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:10.480748 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:10.980089 1387850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 00:32:10.980855 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:11.479066 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:11.480977 1387850 kapi.go:108] duration metric: took 39.577106963s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0811 00:32:11.978497 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:12.477434 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:12.977568 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:13.478968 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:13.978087 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:14.478651 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:14.978215 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:15.479057 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:15.978520 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:16.478308 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:16.977654 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:17.478970 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:17.978263 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:18.479065 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:18.978518 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:19.477635 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:19.978219 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:20.482858 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:20.978313 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:21.479093 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:21.977637 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:22.477774 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:22.977780 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:23.478605 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:23.977808 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:24.477511 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:24.977509 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:25.480776 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:25.978310 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:26.478828 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:26.978037 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:27.478852 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:27.978490 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:28.485097 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:28.978352 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:29.482810 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:29.978509 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:30.478065 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:30.978093 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:31.478736 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:31.978713 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:32.478985 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:32.977984 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:33.478992 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:33.978850 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:34.482336 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:34.978545 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:35.478663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:35.978031 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:36.478559 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:36.979038 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:37.478422 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:37.978166 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:38.478873 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:38.977654 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:39.482663 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:39.977950 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:40.482296 1387850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 00:32:40.978478 1387850 kapi.go:108] duration metric: took 1m9.089435631s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0811 00:32:40.981216 1387850 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, olm, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0811 00:32:40.981241 1387850 addons.go:344] enableAddons completed in 1m19.791344476s
	I0811 00:32:41.039327 1387850 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0811 00:32:41.041912 1387850 out.go:177] * Done! kubectl is now configured to use "addons-20210811003021-1387367" cluster and "default" namespace by default
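	A note on the gcp-auth guidance printed above: the addon reports that credentials are mounted into every new pod unless the pod opts out via a gcp-auth-skip-secret label. A minimal sketch of that opt-out, assuming the addon only checks for the label key named in the log (the pod name my-app and the value "true" are illustrative, not taken from this run):

	    # Hypothetical example: mark a pod so the gcp-auth webhook skips it.
	    kubectl --context addons-20210811003021-1387367 label pod my-app gcp-auth-skip-secret=true
	    # Per the log above, pods created before the addon was enabled must be
	    # recreated (or the addon re-enabled with --refresh) to receive credentials.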
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-08-11 00:30:27 UTC, end at Wed 2021-08-11 00:44:52 UTC. --
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.644801971Z" level=info msg="ignoring event" container=3b180fbf110d5465c38fc9fe32281d9ce9049c9e42c84bf6976db412d188ab7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.667800931Z" level=info msg="ignoring event" container=f9d910b0983cb0892649fb461d813d24cc20dc9cb52f1c34bffc24974c6254b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.702452207Z" level=info msg="ignoring event" container=e8bdb4e95a016eadb24cec73d7616b04ee14ded38e15c052c262924f6ff2c3f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.730696688Z" level=info msg="ignoring event" container=51cfceedd8f38a0b8805c2b152a2b4ceb44c44bc4f841554b61d639778fa2e7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.734685755Z" level=info msg="ignoring event" container=86c4bb5905f038ee10f18ea4c0a0f84c326c6ea5796136c2d15ede9dd76ddb1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.734737890Z" level=info msg="ignoring event" container=8792838a023687918432d5cd4481d77e3a3a5aa2bc5d625c3ad888d404992056 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.744738334Z" level=info msg="ignoring event" container=e6aa1f5da520651d0e9074502bf431431654525850070360b5a6cec6e5cc3a7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.792355864Z" level=info msg="ignoring event" container=55d09413420523886777e2a33350e9d4e2796075c925c7ca4fad0c8d88dc5292 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.792407917Z" level=info msg="ignoring event" container=ee41f2c71eaeff54b319b8ecad4c92cd483b28a2de172b514c5fe1c599907ebc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.916587968Z" level=info msg="ignoring event" container=6c7f1ea7004a53213fe09d61d02be07c1441535233e571eabe8ac6e6393a15d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.916627073Z" level=info msg="ignoring event" container=bc4b5104a1fa804c2972e4d2328a98ff27565054f5f3e5527256db12dbe5f8c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.943898004Z" level=info msg="ignoring event" container=b83b8d797e576d4c4a40079f2b8f2a35f882f80097441865260de54d6b2e777c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:24 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:24.950340614Z" level=info msg="ignoring event" container=8e312aa6ef3e19901d6ba1c041e04294bb4ca23b0ce733693ccf4b514aef34a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:31 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:31.408403304Z" level=info msg="ignoring event" container=e8470704d7993424351e09aaa4b5410979a3540340237e9abc4ed51e584cc32a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:31 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:31.454875690Z" level=info msg="ignoring event" container=29e27695dc8681ce705a3e89313782a1f6d20f62410e6d165c999f11d3658231 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:31 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:31.538200309Z" level=info msg="ignoring event" container=a188b55eb23008df690d2557e3c7fc1864d649d86f6aa3769f056ac1a4acb72c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:31 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:31.573223931Z" level=info msg="ignoring event" container=9abc8197d88f7c30fcccc05d3dc8f2e194182bd3706086590a6ffa50b864e307 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:37 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:37.029128191Z" level=info msg="ignoring event" container=849a40fa43175062f2b2815699169f99f4881ff190fc57d6ace0e2888440ade6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:37 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:37.136097746Z" level=info msg="ignoring event" container=f7efddac8003547b6c26d342b74695955977360c8b76f80483fc604602587e48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:58 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:58.909308958Z" level=info msg="ignoring event" container=383628dc34c7fd40f43dea1044fcd617a27edb9ba51be757aa2308b30d8f3360 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:36:58 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:36:58.972662290Z" level=info msg="ignoring event" container=87f9e6f6e10e0eb5cdaba072c4a18108c73d2d8467fd6f0ebd9757ebe6fc4737 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:37:52 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:37:52.726819175Z" level=info msg="ignoring event" container=4f8318fb427bb7febbf764be3c635bbfdbab0b6d09270d0f23c6ea785d4b34ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:37:59 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:37:59.741431803Z" level=info msg="ignoring event" container=2393d8ddbbc667301cfdbf2e48a0f3c00416d736e78b9ab413ff13b0775be9e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:42:53 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:42:53.750574009Z" level=info msg="ignoring event" container=a09cb7b6d1a17a72b1ec95c48465a7db5768d9e6f1550d8c8a61220a08ae4029 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 00:43:10 addons-20210811003021-1387367 dockerd[458]: time="2021-08-11T00:43:10.746376443Z" level=info msg="ignoring event" container=324e29b7274d5e7d9648497a283b01a592576b081b68a06ecbdaac2b129aa551 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED              STATE               NAME                      ATTEMPT             POD ID
	324e29b7274d5       d544402579747                                                                     About a minute ago   Exited              olm-operator              7                   239321d9715a9
	a09cb7b6d1a17       d544402579747                                                                     About a minute ago   Exited              catalog-operator          7                   dc27d55e9b2e5
	c53c8cecd76a3       nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce     8 minutes ago        Running             nginx                     0                   bb18f79979388
	3b95079c4f6ad       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   9 minutes ago        Running             busybox                   0                   5872a214a1ca7
	0d6ae04912a61       ba04bb24b9575                                                                     13 minutes ago       Running             storage-provisioner       0                   c19163d31a596
	4f14ad2dc9238       1a1f05a2cd7c2                                                                     13 minutes ago       Running             coredns                   0                   f3126492d7db3
	3e17f7de9e8a2       4ea38350a1beb                                                                     13 minutes ago       Running             kube-proxy                0                   4393665d45427
	178036f64854a       cb310ff289d79                                                                     13 minutes ago       Running             kube-controller-manager   0                   7e5d403628742
	daa4bc492ed71       05b738aa1bc63                                                                     13 minutes ago       Running             etcd                      0                   5c7734c8acc19
	107ea2d3d596b       44a6d50ef170d                                                                     13 minutes ago       Running             kube-apiserver            0                   efd4677540c6b
	4f7326edc3cff       31a3b96cefc1e                                                                     13 minutes ago       Running             kube-scheduler            0                   367240f7e40a9
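	The container status table shows olm-operator and catalog-operator as Exited on attempt 7 while everything else is Running; these are the same OLM pods the kubelet later reports in CrashLoopBackOff. A minimal sketch for pulling their last output straight from the node's Docker runtime, using the container IDs listed above (the minikube ssh form is an assumption about how the node is reached in this environment):

	    # Read the exited containers' logs via Docker on the minikube node.
	    minikube -p addons-20210811003021-1387367 ssh -- docker logs --tail 50 324e29b7274d5   # olm-operator
	    minikube -p addons-20210811003021-1387367 ssh -- docker logs --tail 50 a09cb7b6d1a17   # catalog-operator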
	
	* 
	* ==> coredns [4f14ad2dc923] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210811003021-1387367
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210811003021-1387367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=addons-20210811003021-1387367
	                    minikube.k8s.io/updated_at=2021_08_11T00_31_06_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210811003021-1387367
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Aug 2021 00:31:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210811003021-1387367
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Aug 2021 00:44:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Aug 2021 00:41:45 +0000   Wed, 11 Aug 2021 00:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Aug 2021 00:41:45 +0000   Wed, 11 Aug 2021 00:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Aug 2021 00:41:45 +0000   Wed, 11 Aug 2021 00:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Aug 2021 00:41:45 +0000   Wed, 11 Aug 2021 00:31:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210811003021-1387367
	Capacity:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                7597b455-7869-476e-86a2-9b994506f601
	  Boot ID:                    dff2c102-a0cf-4fb0-a2ea-36617f3a3229
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace      Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------      ----                                                      ------------  ----------  ---------------  -------------  ---
	  default        busybox                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default        nginx                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system    coredns-558bd4d5db-j4xjh                                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system    etcd-addons-20210811003021-1387367                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system    kube-apiserver-addons-20210811003021-1387367              250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system    kube-controller-manager-addons-20210811003021-1387367    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system    kube-proxy-hbv8p                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system    kube-scheduler-addons-20210811003021-1387367              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system    storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  olm            catalog-operator-75d496484d-lftth                         10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         13m
	  olm            olm-operator-859c88c96-zfpv9                              10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                770m (38%)   0 (0%)
	  memory             410Mi (5%)   170Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  13m (x5 over 13m)  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x4 over 13m)  kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet     Node addons-20210811003021-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet     Node addons-20210811003021-1387367 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet     Node addons-20210811003021-1387367 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet     Node addons-20210811003021-1387367 status is now: NodeReady
	  Normal  Starting                 13m                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001104] FS-Cache: O-key=[8] 'c762010000000000'
	[  +0.000863] FS-Cache: N-cookie c=000000006895995f [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001353] FS-Cache: N-cookie d=00000000d0f41ca1 n=0000000007d05ee7
	[  +0.001085] FS-Cache: N-key=[8] 'c762010000000000'
	[Aug10 23:20] FS-Cache: Duplicate cookie detected
	[  +0.000856] FS-Cache: O-cookie c=00000000af756993 [p=000000003cfe13d3 fl=226 nc=0 na=1]
	[  +0.001346] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000009356b987
	[  +0.001071] FS-Cache: O-key=[8] 'c562010000000000'
	[  +0.000838] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001331] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000e0c82591
	[  +0.001061] FS-Cache: N-key=[8] 'c562010000000000'
	[  +0.001531] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000000ccb09f62 [p=000000003cfe13d3 fl=226 nc=0 na=1]
	[  +0.001326] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000001c672d8a
	[  +0.001069] FS-Cache: O-key=[8] 'c762010000000000'
	[  +0.001140] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001307] FS-Cache: N-cookie d=00000000d0f41ca1 n=0000000083a2ea2e
	[  +0.001068] FS-Cache: N-key=[8] 'c762010000000000'
	[  +0.001828] FS-Cache: Duplicate cookie detected
	[  +0.000775] FS-Cache: O-cookie c=0000000089195cf5 [p=000000003cfe13d3 fl=226 nc=0 na=1]
	[  +0.001346] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000024759c93
	[  +0.001076] FS-Cache: O-key=[8] 'c662010000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000000062b369eb [p=000000003cfe13d3 fl=2 nc=0 na=1]
	[  +0.001320] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000f79fca59
	[  +0.001058] FS-Cache: N-key=[8] 'c662010000000000'
	
	* 
	* ==> etcd [daa4bc492ed7] <==
	* 2021-08-11 00:40:58.957782 I | mvcc: finished scheduled compaction at 1614 (took 26.006937ms)
	2021-08-11 00:41:00.566433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:41:10.566162 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:41:20.566184 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:41:30.565686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:41:40.566656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:41:50.566590 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:42:00.565982 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:42:10.566077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:42:20.566638 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:42:30.566573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:42:40.565910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:42:50.566662 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:43:00.566814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:43:10.565972 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:43:20.566655 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:43:30.565815 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:43:40.566101 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:43:50.566088 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:44:00.565790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:44:10.566140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:44:20.565805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:44:30.565751 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:44:40.565952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 00:44:50.566578 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:44:53 up 10:27,  0 users,  load average: 0.20, 0.57, 1.57
	Linux addons-20210811003021-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [107ea2d3d596] <==
	* I0811 00:39:34.336618       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:40:13.986986       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:40:13.987031       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:40:13.987040       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:40:56.958444       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:40:56.958489       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:40:56.958499       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:41:31.232435       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:41:31.232481       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:41:31.232490       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:42:14.513500       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:42:14.513550       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:42:14.513559       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:42:49.025796       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:42:49.025844       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:42:49.025877       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:43:21.852622       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:43:21.852668       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:43:21.852677       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:44:01.152579       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:44:01.152644       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:44:01.152659       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 00:44:40.093105       1 client.go:360] parsed scheme: "passthrough"
	I0811 00:44:40.093153       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 00:44:40.093161       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [178036f64854] <==
	* E0811 00:38:00.792803       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:38:40.670706       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:38:41.687186       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:38:56.642220       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:39:30.511139       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:39:30.551359       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:39:49.780751       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:40:02.207804       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:40:07.079413       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:40:39.810018       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:40:52.008992       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:40:54.826673       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:41:25.846523       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:41:42.150950       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:41:45.092512       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:42:22.780704       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:42:27.032907       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:42:33.248093       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:43:14.108976       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:43:18.158674       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:43:30.409544       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:43:55.954905       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:44:07.009923       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:44:28.236295       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 00:44:29.656953       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [3e17f7de9e8a] <==
	* I0811 00:31:21.677244       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0811 00:31:21.677330       1 server_others.go:140] Detected node IP 192.168.49.2
	W0811 00:31:21.677372       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0811 00:31:21.789613       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0811 00:31:21.789651       1 server_others.go:212] Using iptables Proxier.
	I0811 00:31:21.789661       1 server_others.go:219] creating dualStackProxier for iptables.
	W0811 00:31:21.789673       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0811 00:31:21.789947       1 server.go:643] Version: v1.21.3
	I0811 00:31:21.855080       1 config.go:315] Starting service config controller
	I0811 00:31:21.855098       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0811 00:31:21.855222       1 config.go:224] Starting endpoint slice config controller
	I0811 00:31:21.855227       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0811 00:31:21.868516       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0811 00:31:21.870560       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0811 00:31:21.955804       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0811 00:31:21.955864       1 shared_informer.go:247] Caches are synced for service config 
	W0811 00:40:28.872840       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [4f7326edc3cf] <==
	* W0811 00:31:03.888717       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0811 00:31:03.888737       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0811 00:31:04.007158       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0811 00:31:04.010685       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0811 00:31:04.010727       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 00:31:04.022524       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0811 00:31:04.023548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.024389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.024468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 00:31:04.024200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 00:31:04.024270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:31:04.024328       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 00:31:04.024586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 00:31:04.024664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 00:31:04.024722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 00:31:04.024777       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 00:31:04.024827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 00:31:04.024886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.025054       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 00:31:04.025179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:04.873178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:31:04.947036       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 00:31:04.986978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 00:31:05.021088       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0811 00:31:07.122862       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-08-11 00:30:27 UTC, end at Wed 2021-08-11 00:44:53 UTC. --
	Aug 11 00:43:29 addons-20210811003021-1387367 kubelet[2321]: E0811 00:43:29.573659    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:43:39 addons-20210811003021-1387367 kubelet[2321]: I0811 00:43:39.569935    2321 scope.go:111] "RemoveContainer" containerID="324e29b7274d5e7d9648497a283b01a592576b081b68a06ecbdaac2b129aa551"
	Aug 11 00:43:39 addons-20210811003021-1387367 kubelet[2321]: E0811 00:43:39.570345    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:43:41 addons-20210811003021-1387367 kubelet[2321]: I0811 00:43:41.569548    2321 scope.go:111] "RemoveContainer" containerID="a09cb7b6d1a17a72b1ec95c48465a7db5768d9e6f1550d8c8a61220a08ae4029"
	Aug 11 00:43:41 addons-20210811003021-1387367 kubelet[2321]: E0811 00:43:41.570328    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:43:52 addons-20210811003021-1387367 kubelet[2321]: I0811 00:43:52.569370    2321 scope.go:111] "RemoveContainer" containerID="324e29b7274d5e7d9648497a283b01a592576b081b68a06ecbdaac2b129aa551"
	Aug 11 00:43:52 addons-20210811003021-1387367 kubelet[2321]: E0811 00:43:52.572600    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:43:54 addons-20210811003021-1387367 kubelet[2321]: I0811 00:43:54.572958    2321 scope.go:111] "RemoveContainer" containerID="a09cb7b6d1a17a72b1ec95c48465a7db5768d9e6f1550d8c8a61220a08ae4029"
	Aug 11 00:43:54 addons-20210811003021-1387367 kubelet[2321]: E0811 00:43:54.573692    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:44:06 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:06.569585    2321 scope.go:111] "RemoveContainer" containerID="324e29b7274d5e7d9648497a283b01a592576b081b68a06ecbdaac2b129aa551"
	Aug 11 00:44:06 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:06.570422    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:44:06 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:06.571044    2321 scope.go:111] "RemoveContainer" containerID="a09cb7b6d1a17a72b1ec95c48465a7db5768d9e6f1550d8c8a61220a08ae4029"
	Aug 11 00:44:06 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:06.571479    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:44:17 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:17.569831    2321 scope.go:111] "RemoveContainer" containerID="a09cb7b6d1a17a72b1ec95c48465a7db5768d9e6f1550d8c8a61220a08ae4029"
	Aug 11 00:44:17 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:17.570266    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:44:18 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:18.569519    2321 scope.go:111] "RemoveContainer" containerID="324e29b7274d5e7d9648497a283b01a592576b081b68a06ecbdaac2b129aa551"
	Aug 11 00:44:18 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:18.570056    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:44:30 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:30.570521    2321 scope.go:111] "RemoveContainer" containerID="324e29b7274d5e7d9648497a283b01a592576b081b68a06ecbdaac2b129aa551"
	Aug 11 00:44:30 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:30.570876    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:44:31 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:31.569789    2321 scope.go:111] "RemoveContainer" containerID="a09cb7b6d1a17a72b1ec95c48465a7db5768d9e6f1550d8c8a61220a08ae4029"
	Aug 11 00:44:31 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:31.570193    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	Aug 11 00:44:42 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:42.569771    2321 scope.go:111] "RemoveContainer" containerID="324e29b7274d5e7d9648497a283b01a592576b081b68a06ecbdaac2b129aa551"
	Aug 11 00:44:42 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:42.570186    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-zfpv9_olm(2866bd0c-37ae-465c-915f-d324574f23f7)\"" pod="olm/olm-operator-859c88c96-zfpv9" podUID=2866bd0c-37ae-465c-915f-d324574f23f7
	Aug 11 00:44:43 addons-20210811003021-1387367 kubelet[2321]: I0811 00:44:43.569560    2321 scope.go:111] "RemoveContainer" containerID="a09cb7b6d1a17a72b1ec95c48465a7db5768d9e6f1550d8c8a61220a08ae4029"
	Aug 11 00:44:43 addons-20210811003021-1387367 kubelet[2321]: E0811 00:44:43.569982    2321 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-lftth_olm(d8dfd948-ff17-4767-8371-4be73646cb5d)\"" pod="olm/catalog-operator-75d496484d-lftth" podUID=d8dfd948-ff17-4767-8371-4be73646cb5d
	
	* 
	* ==> storage-provisioner [0d6ae04912a6] <==
	* I0811 00:31:24.716594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0811 00:31:24.745311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0811 00:31:24.745363       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0811 00:31:24.768040       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0811 00:31:24.768216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6!
	I0811 00:31:24.771936       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70e765f0-f18d-4a79-9f04-05826884f687", APIVersion:"v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6 became leader
	I0811 00:31:24.968818       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210811003021-1387367_ae097080-2d56-4c92-b0f7-bfd9c649e5f6!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210811003021-1387367 -n addons-20210811003021-1387367
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210811003021-1387367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestAddons/parallel/Olm]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context addons-20210811003021-1387367 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context addons-20210811003021-1387367 describe pod : exit status 1 (63.000657ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:275: kubectl --context addons-20210811003021-1387367 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Olm (733.12s)
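Note: the "resource name may not be empty" error above comes from the post-mortem step running `kubectl describe pod` with no pod names, because the `status.phase!=Running` selector matched nothing. A minimal Go sketch of guarding that step (hypothetical helper, not the helpers_test.go code; the context name is copied from the log above):

// Hypothetical guard (not the helpers_test.go implementation): only run
// "kubectl describe pod" when the field selector actually returned pod names,
// which avoids the "resource name may not be empty" error seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNonRunningPods takes the space-separated jsonpath output of
// "kubectl get po ... --field-selector=status.phase!=Running".
func describeNonRunningPods(kubeContext, names string) error {
	pods := strings.Fields(names)
	if len(pods) == 0 {
		fmt.Println("no non-running pods to describe")
		return nil
	}
	args := append([]string{"--context", kubeContext, "describe", "pod"}, pods...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// With an empty selector result, as in the log above, this is a no-op.
	_ = describeNonRunningPods("addons-20210811003021-1387367", "")
}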

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (605.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- rollout status deployment/busybox
E0811 00:55:48.653300 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:57:41.079803 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:58:04.809183 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:58:32.493523 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 01:02:41.079824 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 01:03:04.809827 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 01:04:04.125344 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- rollout status deployment/busybox: exit status 1 (10m0.059607386s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
multinode_test.go:469: failed to deploy busybox to multinode cluster
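Note: the rollout wait above ran for the full 10m before failing on the progress deadline. A minimal sketch (hypothetical names, not the multinode_test.go implementation) of driving the same wait from Go with an explicit bound via kubectl's --timeout flag:

// A minimal sketch (hypothetical names, not multinode_test.go) of bounding a
// rollout wait instead of letting it consume the full 10m budget as above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForRollout(kubeContext, deployment string, timeout time.Duration) error {
	// kubectl enforces --timeout itself; the context is a belt-and-braces bound.
	ctx, cancel := context.WithTimeout(context.Background(), timeout+30*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "kubectl",
		"--context", kubeContext,
		"rollout", "status", "deployment/"+deployment,
		"--timeout="+timeout.String())
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := waitForRollout("multinode-20210811005307-1387367", "busybox", 2*time.Minute); err != nil {
		fmt.Println("rollout did not finish in time:", err)
	}
}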
multinode_test.go:473: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- nslookup kubernetes.io: exit status 1 (216.326726ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:495: Pod busybox-84b6686758-2jxsd could not resolve 'kubernetes.io': exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- nslookup kubernetes.io: exit status 1 (183.886349ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:495: Pod busybox-84b6686758-c9mqs could not resolve 'kubernetes.io': exit status 1
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- nslookup kubernetes.default: exit status 1 (183.978804ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:505: Pod busybox-84b6686758-2jxsd could not resolve 'kubernetes.default': exit status 1
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- nslookup kubernetes.default: exit status 1 (188.872204ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:505: Pod busybox-84b6686758-c9mqs could not resolve 'kubernetes.default': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (184.415363ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:513: Pod busybox-84b6686758-2jxsd could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (182.104125ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:513: Pod busybox-84b6686758-c9mqs could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
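Note: every nslookup above failed with "unable to upgrade connection: container not found" rather than a DNS error, i.e. the busybox containers never came up. A hypothetical pre-check (not part of the test suite) that verifies the container is running before exec'ing into it:

// Hypothetical pre-check (not part of the test suite): make sure the pod's
// container is actually running before exec'ing nslookup, since the failures
// above were "container not found" rather than DNS resolution errors.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerReady(kubeContext, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "pod", pod,
		"-o", "jsonpath={.status.containerStatuses[0].ready}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func lookupInPod(kubeContext, pod, host string) error {
	if ok, err := containerReady(kubeContext, pod); err != nil || !ok {
		return fmt.Errorf("pod %s not ready (err=%v), skipping nslookup", pod, err)
	}
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", pod, "--", "nslookup", host).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := lookupInPod("multinode-20210811005307-1387367", "busybox-84b6686758-2jxsd", "kubernetes.default"); err != nil {
		fmt.Println(err)
	}
}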
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20210811005307-1387367
helpers_test.go:236: (dbg) docker inspect multinode-20210811005307-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a",
	        "Created": "2021-08-11T00:53:08.554271158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1439761,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-11T00:53:09.047827202Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/hostname",
	        "HostsPath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/hosts",
	        "LogPath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a-json.log",
	        "Name": "/multinode-20210811005307-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20210811005307-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20210811005307-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20210811005307-1387367",
	                "Source": "/var/lib/docker/volumes/multinode-20210811005307-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20210811005307-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20210811005307-1387367",
	                "name.minikube.sigs.k8s.io": "multinode-20210811005307-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e2f2f630f0d3756864343a7222d7c068ec558656959e55017181b93ce3089a53",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50285"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50284"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50281"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50283"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50282"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e2f2f630f0d3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20210811005307-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "549bdb3bf1ad",
	                        "multinode-20210811005307-1387367"
	                    ],
	                    "NetworkID": "895f73080075bf95fc7bbf77ee83def6add633e6a908afc47428f4d25c69cb31",
	                    "EndpointID": "bca199b2a8b1e88644cd3d2f5b90ac6963d4c7d7de35a9a19e7c399d1b37a8b6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
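Note: the NetworkSettings.Ports block in the inspect output above shows which localhost ports map to the node's services (e.g. 8443/tcp -> 127.0.0.1:50282). A small sketch, assuming the container name from the dump and not part of the test suite, that extracts the API server's host port with a docker inspect format template:

// Small sketch (assumed, not part of the test suite) that reads the host port
// mapped to the API server's 8443/tcp straight from "docker inspect", matching
// the NetworkSettings.Ports block in the dump above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func apiServerHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect",
		"--format", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := apiServerHostPort("multinode-20210811005307-1387367")
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("API server mapped to 127.0.0.1:" + port) // 50282 in the dump above
}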
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-20210811005307-1387367 -n multinode-20210811005307-1387367
helpers_test.go:245: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 logs -n 25: (1.65032863s)
helpers_test.go:253: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                  |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:42 UTC | Wed, 11 Aug 2021 00:48:43 UTC |
	|         | ssh sudo cat                                      |                                          |          |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0                         |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:43 UTC | Wed, 11 Aug 2021 00:48:43 UTC |
	|         | ssh sudo cat                                      |                                          |          |         |                               |                               |
	|         | /etc/ssl/certs/13873672.pem                       |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:43 UTC | Wed, 11 Aug 2021 00:48:43 UTC |
	|         | ssh sudo cat                                      |                                          |          |         |                               |                               |
	|         | /usr/share/ca-certificates/13873672.pem           |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:42 UTC | Wed, 11 Aug 2021 00:48:43 UTC |
	|         | version -o=json --components                      |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:43 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | update-context --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | -v=2                                              |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:43 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | ssh sudo cat                                      |                                          |          |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                         |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:44 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | update-context --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | -v=2                                              |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:44 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | update-context --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | -v=2                                              |                                          |          |         |                               |                               |
	| delete  | -p                                                | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:44 UTC | Wed, 11 Aug 2021 00:48:47 UTC |
	|         | functional-20210811004603-1387367                 |                                          |          |         |                               |                               |
	| start   | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:48:47 UTC | Wed, 11 Aug 2021 00:50:31 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	|         | --memory=2200 --wait=true                         |                                          |          |         |                               |                               |
	|         | --driver=docker                                   |                                          |          |         |                               |                               |
	|         | --container-runtime=docker                        |                                          |          |         |                               |                               |
	| pause   | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:50:31 UTC | Wed, 11 Aug 2021 00:50:32 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:50:32 UTC | Wed, 11 Aug 2021 00:50:32 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:50:32 UTC | Wed, 11 Aug 2021 00:50:43 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210811004847-1387367       | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:50:43 UTC | Wed, 11 Aug 2021 00:50:45 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210811005045-1387367 | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:50:45 UTC | Wed, 11 Aug 2021 00:50:46 UTC |
	|         | json-output-error-20210811005045-1387367          |                                          |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210811005046-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:50:46 UTC | Wed, 11 Aug 2021 00:51:28 UTC |
	|         | docker-network-20210811005046-1387367             |                                          |          |         |                               |                               |
	|         | --network=                                        |                                          |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210811005046-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:51:28 UTC | Wed, 11 Aug 2021 00:51:30 UTC |
	|         | docker-network-20210811005046-1387367             |                                          |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210811005130-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:51:30 UTC | Wed, 11 Aug 2021 00:52:16 UTC |
	|         | docker-network-20210811005130-1387367             |                                          |          |         |                               |                               |
	|         | --network=bridge                                  |                                          |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210811005130-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:52:16 UTC | Wed, 11 Aug 2021 00:52:19 UTC |
	|         | docker-network-20210811005130-1387367             |                                          |          |         |                               |                               |
	| start   | -p                                                | existing-network-20210811005219-1387367  | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:52:19 UTC | Wed, 11 Aug 2021 00:53:04 UTC |
	|         | existing-network-20210811005219-1387367           |                                          |          |         |                               |                               |
	|         | --network=existing-network                        |                                          |          |         |                               |                               |
	| delete  | -p                                                | existing-network-20210811005219-1387367  | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:53:04 UTC | Wed, 11 Aug 2021 00:53:07 UTC |
	|         | existing-network-20210811005219-1387367           |                                          |          |         |                               |                               |
	| start   | -p                                                | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:53:07 UTC | Wed, 11 Aug 2021 00:55:01 UTC |
	|         | multinode-20210811005307-1387367                  |                                          |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                          |          |         |                               |                               |
	|         | --nodes=2 -v=8 --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | --driver=docker                                   |                                          |          |         |                               |                               |
	|         | --container-runtime=docker                        |                                          |          |         |                               |                               |
	| kubectl | -p multinode-20210811005307-1387367 -- apply -f   | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:55:02 UTC | Wed, 11 Aug 2021 00:55:02 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                          |          |         |                               |                               |
	| kubectl | -p multinode-20210811005307-1387367               | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:03 UTC | Wed, 11 Aug 2021 01:05:03 UTC |
	|         | -- get pods -o                                    |                                          |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                          |          |         |                               |                               |
	| kubectl | -p multinode-20210811005307-1387367               | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:03 UTC | Wed, 11 Aug 2021 01:05:03 UTC |
	|         | -- get pods -o                                    |                                          |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                          |          |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 00:53:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 00:53:07.230893 1439337 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:53:07.231024 1439337 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:53:07.231034 1439337 out.go:311] Setting ErrFile to fd 2...
	I0811 00:53:07.231038 1439337 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:53:07.231170 1439337 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 00:53:07.231476 1439337 out.go:305] Setting JSON to false
	I0811 00:53:07.232592 1439337 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":38134,"bootTime":1628605053,"procs":472,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:53:07.232678 1439337 start.go:121] virtualization:  
	I0811 00:53:07.235645 1439337 out.go:177] * [multinode-20210811005307-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 00:53:07.238230 1439337 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 00:53:07.236971 1439337 notify.go:169] Checking for updates...
	I0811 00:53:07.240187 1439337 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:53:07.242264 1439337 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 00:53:07.244351 1439337 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 00:53:07.244637 1439337 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 00:53:07.285517 1439337 docker.go:132] docker version: linux-20.10.8
	I0811 00:53:07.285613 1439337 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:53:07.395966 1439337 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:53:07.336423317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:53:07.396112 1439337 docker.go:244] overlay module found
	I0811 00:53:07.398556 1439337 out.go:177] * Using the docker driver based on user configuration
	I0811 00:53:07.398593 1439337 start.go:278] selected driver: docker
	I0811 00:53:07.398600 1439337 start.go:751] validating driver "docker" against <nil>
	I0811 00:53:07.398619 1439337 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 00:53:07.398679 1439337 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 00:53:07.398697 1439337 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 00:53:07.401034 1439337 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 00:53:07.401409 1439337 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:53:07.487345 1439337 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:53:07.429417039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:53:07.487464 1439337 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 00:53:07.487627 1439337 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 00:53:07.487648 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:53:07.487654 1439337 cni.go:154] 0 nodes found, recommending kindnet
	I0811 00:53:07.487671 1439337 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0811 00:53:07.487683 1439337 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0811 00:53:07.487688 1439337 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 00:53:07.487699 1439337 start_flags.go:277] config:
	{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0811 00:53:07.490053 1439337 out.go:177] * Starting control plane node multinode-20210811005307-1387367 in cluster multinode-20210811005307-1387367
	I0811 00:53:07.490090 1439337 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:53:07.492672 1439337 out.go:177] * Pulling base image ...
	I0811 00:53:07.492712 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:53:07.492763 1439337 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 00:53:07.492775 1439337 cache.go:56] Caching tarball of preloaded images
	I0811 00:53:07.492966 1439337 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 00:53:07.492994 1439337 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 00:53:07.493384 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:53:07.493423 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json: {Name:mkfc3ef7858325d4b50a477430c66e7ccebc5920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:07.493522 1439337 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:53:07.551182 1439337 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:53:07.551211 1439337 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:53:07.551227 1439337 cache.go:205] Successfully downloaded all kic artifacts
	I0811 00:53:07.551265 1439337 start.go:313] acquiring machines lock for multinode-20210811005307-1387367: {Name:mkb3178c18c35426cb33192cdbdabcbff217bc0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 00:53:07.551967 1439337 start.go:317] acquired machines lock for "multinode-20210811005307-1387367" in 675.926µs
	I0811 00:53:07.552002 1439337 start.go:89] Provisioning new machine with config: &{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:53:07.552092 1439337 start.go:126] createHost starting for "" (driver="docker")
	I0811 00:53:07.557170 1439337 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 00:53:07.557458 1439337 start.go:160] libmachine.API.Create for "multinode-20210811005307-1387367" (driver="docker")
	I0811 00:53:07.557495 1439337 client.go:168] LocalClient.Create starting
	I0811 00:53:07.557565 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 00:53:07.557600 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:53:07.557622 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:53:07.557740 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 00:53:07.557761 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:53:07.557785 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:53:07.558163 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 00:53:07.589516 1439337 cli_runner.go:162] docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 00:53:07.589604 1439337 network_create.go:255] running [docker network inspect multinode-20210811005307-1387367] to gather additional debugging logs...
	I0811 00:53:07.589630 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367
	W0811 00:53:07.620364 1439337 cli_runner.go:162] docker network inspect multinode-20210811005307-1387367 returned with exit code 1
	I0811 00:53:07.620397 1439337 network_create.go:258] error running [docker network inspect multinode-20210811005307-1387367]: docker network inspect multinode-20210811005307-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20210811005307-1387367
	I0811 00:53:07.620422 1439337 network_create.go:260] output of [docker network inspect multinode-20210811005307-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20210811005307-1387367
	
	** /stderr **
	I0811 00:53:07.620476 1439337 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:53:07.652329 1439337 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x400086ac48] misses:0}
	I0811 00:53:07.652378 1439337 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 00:53:07.652395 1439337 network_create.go:106] attempt to create docker network multinode-20210811005307-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 00:53:07.652451 1439337 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20210811005307-1387367
	I0811 00:53:07.719095 1439337 network_create.go:90] docker network multinode-20210811005307-1387367 192.168.49.0/24 created
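At this point the dedicated bridge network exists with the subnet, gateway and MTU chosen above. To read the result back outside the test (a sketch, assuming the network from this run is still present), the IPAM settings can be queried with a format template:
	docker network inspect multinode-20210811005307-1387367 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
	# expected for this run: 192.168.49.0/24 gw=192.168.49.1 mtu=1500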
	I0811 00:53:07.719128 1439337 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20210811005307-1387367" container
	I0811 00:53:07.719192 1439337 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 00:53:07.749649 1439337 cli_runner.go:115] Run: docker volume create multinode-20210811005307-1387367 --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 00:53:07.781727 1439337 oci.go:102] Successfully created a docker volume multinode-20210811005307-1387367
	I0811 00:53:07.781826 1439337 cli_runner.go:115] Run: docker run --rm --name multinode-20210811005307-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --entrypoint /usr/bin/test -v multinode-20210811005307-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 00:53:08.390978 1439337 oci.go:106] Successfully prepared a docker volume multinode-20210811005307-1387367
	W0811 00:53:08.391031 1439337 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 00:53:08.391038 1439337 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 00:53:08.391115 1439337 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 00:53:08.391321 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:53:08.391343 1439337 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 00:53:08.391394 1439337 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 00:53:08.519786 1439337 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210811005307-1387367 --name multinode-20210811005307-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --network multinode-20210811005307-1387367 --ip 192.168.49.2 --volume multinode-20210811005307-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 00:53:09.058814 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Running}}
	I0811 00:53:09.115523 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:53:09.171781 1439337 cli_runner.go:115] Run: docker exec multinode-20210811005307-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 00:53:09.326769 1439337 oci.go:278] the created container "multinode-20210811005307-1387367" has a running status.
	I0811 00:53:09.326799 1439337 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa...
	I0811 00:53:09.536735 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 00:53:09.536785 1439337 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 00:53:09.708292 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:53:09.759288 1439337 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 00:53:09.759305 1439337 kic_runner.go:115] Args: [docker exec --privileged multinode-20210811005307-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 00:53:18.353799 1439337 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (9.962370689s)
	I0811 00:53:18.353827 1439337 kic.go:188] duration metric: took 9.962481 seconds to extract preloaded images to volume
	I0811 00:53:18.353915 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:53:18.401891 1439337 machine.go:88] provisioning docker machine ...
	I0811 00:53:18.401923 1439337 ubuntu.go:169] provisioning hostname "multinode-20210811005307-1387367"
	I0811 00:53:18.401989 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:18.447406 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:18.447602 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:18.447616 1439337 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210811005307-1387367 && echo "multinode-20210811005307-1387367" | sudo tee /etc/hostname
	I0811 00:53:18.578026 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210811005307-1387367
	
	I0811 00:53:18.578124 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:18.621976 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:18.622154 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:18.622175 1439337 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210811005307-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210811005307-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210811005307-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 00:53:18.744696 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 00:53:18.744724 1439337 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/k
ey.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 00:53:18.744743 1439337 ubuntu.go:177] setting up certificates
	I0811 00:53:18.744752 1439337 provision.go:83] configureAuth start
	I0811 00:53:18.744813 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367
	I0811 00:53:18.777144 1439337 provision.go:137] copyHostCerts
	I0811 00:53:18.777184 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:53:18.777212 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 00:53:18.777224 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:53:18.777300 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 00:53:18.777379 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:53:18.777409 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 00:53:18.777418 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:53:18.777442 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 00:53:18.777484 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:53:18.777504 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 00:53:18.777513 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:53:18.777533 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 00:53:18.777573 1439337 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210811005307-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210811005307-1387367]
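minikube generates this server certificate in its own Go code; if you need to reproduce or inspect equivalent material by hand, openssl can issue a cert signed by the same CA with the SANs listed above (a sketch only; the file names follow the log, but this invocation is not what minikube runs):
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.multinode-20210811005307-1387367" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-20210811005307-1387367')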
	I0811 00:53:19.088368 1439337 provision.go:171] copyRemoteCerts
	I0811 00:53:19.088459 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 00:53:19.088517 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.120358 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:19.203800 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 00:53:19.203855 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 00:53:19.220584 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 00:53:19.220680 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0811 00:53:19.237619 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 00:53:19.237670 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 00:53:19.254142 1439337 provision.go:86] duration metric: configureAuth took 509.370926ms
	I0811 00:53:19.254165 1439337 ubuntu.go:193] setting minikube options for container-runtime
	I0811 00:53:19.254377 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.286358 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:19.286533 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:19.286551 1439337 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 00:53:19.400879 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 00:53:19.400903 1439337 ubuntu.go:71] root file system type: overlay
	I0811 00:53:19.401076 1439337 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 00:53:19.401144 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.435116 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:19.435297 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:19.435397 1439337 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 00:53:19.557864 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 00:53:19.557996 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.591634 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:19.591806 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:19.591839 1439337 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 00:53:20.493929 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 00:53:19.549031023 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 00:53:20.493958 1439337 machine.go:91] provisioned docker machine in 2.09204609s
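The restart only happened because the rendered unit differed from the installed one; the SSH command issued at 00:53:19.591 is written so that re-provisioning an already-correct node is a no-op. Spelled out with the same paths as the log:
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi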
	I0811 00:53:20.493974 1439337 client.go:171] LocalClient.Create took 12.936470185s
	I0811 00:53:20.493998 1439337 start.go:168] duration metric: libmachine.API.Create for "multinode-20210811005307-1387367" took 12.93653983s
	I0811 00:53:20.494013 1439337 start.go:267] post-start starting for "multinode-20210811005307-1387367" (driver="docker")
	I0811 00:53:20.494018 1439337 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 00:53:20.494089 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 00:53:20.494135 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.531089 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:20.616403 1439337 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 00:53:20.618850 1439337 command_runner.go:124] > NAME="Ubuntu"
	I0811 00:53:20.618868 1439337 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0811 00:53:20.618874 1439337 command_runner.go:124] > ID=ubuntu
	I0811 00:53:20.618879 1439337 command_runner.go:124] > ID_LIKE=debian
	I0811 00:53:20.618886 1439337 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0811 00:53:20.618896 1439337 command_runner.go:124] > VERSION_ID="20.04"
	I0811 00:53:20.618903 1439337 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0811 00:53:20.618913 1439337 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0811 00:53:20.618921 1439337 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0811 00:53:20.618931 1439337 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0811 00:53:20.618937 1439337 command_runner.go:124] > VERSION_CODENAME=focal
	I0811 00:53:20.618942 1439337 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0811 00:53:20.619199 1439337 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 00:53:20.619221 1439337 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 00:53:20.619232 1439337 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 00:53:20.619244 1439337 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 00:53:20.619253 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 00:53:20.619310 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 00:53:20.619402 1439337 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 00:53:20.619413 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /etc/ssl/certs/13873672.pem
	I0811 00:53:20.619503 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 00:53:20.625943 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:53:20.642886 1439337 start.go:270] post-start completed in 148.85973ms
	I0811 00:53:20.643292 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367
	I0811 00:53:20.674817 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:53:20.675066 1439337 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 00:53:20.675117 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.707066 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:20.789063 1439337 command_runner.go:124] > 79%
	I0811 00:53:20.789095 1439337 start.go:129] duration metric: createHost completed in 13.236994962s
	I0811 00:53:20.789106 1439337 start.go:80] releasing machines lock for "multinode-20210811005307-1387367", held for 13.237121812s
	I0811 00:53:20.789189 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367
	I0811 00:53:20.820583 1439337 ssh_runner.go:149] Run: systemctl --version
	I0811 00:53:20.820633 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.820636 1439337 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 00:53:20.820696 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.865413 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:20.881118 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:21.168801 1439337 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0811 00:53:21.168823 1439337 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0811 00:53:21.168830 1439337 command_runner.go:124] > <H1>302 Moved</H1>
	I0811 00:53:21.168835 1439337 command_runner.go:124] > The document has moved
	I0811 00:53:21.168844 1439337 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0811 00:53:21.168849 1439337 command_runner.go:124] > </BODY></HTML>
	I0811 00:53:21.168884 1439337 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.7)
	I0811 00:53:21.168908 1439337 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0811 00:53:21.169002 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 00:53:21.177915 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:53:21.185867 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:53:21.186753 1439337 command_runner.go:124] > [Unit]
	I0811 00:53:21.186788 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:53:21.186795 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:53:21.186802 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:53:21.186811 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:53:21.186826 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:53:21.186832 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:53:21.186841 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:53:21.186846 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:53:21.186850 1439337 command_runner.go:124] > [Service]
	I0811 00:53:21.186854 1439337 command_runner.go:124] > Type=notify
	I0811 00:53:21.186859 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:53:21.186869 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:53:21.186884 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:53:21.186896 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:53:21.186908 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:53:21.186918 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:53:21.186930 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:53:21.186941 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:53:21.186956 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:53:21.186966 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:53:21.186970 1439337 command_runner.go:124] > ExecStart=
	I0811 00:53:21.187000 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:53:21.187010 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:53:21.187021 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:53:21.187034 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:53:21.187041 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:53:21.187047 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:53:21.187051 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:53:21.187062 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:53:21.187071 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:53:21.187076 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:53:21.187088 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:53:21.187097 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:53:21.187107 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:53:21.187115 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:53:21.187125 1439337 command_runner.go:124] > KillMode=process
	I0811 00:53:21.187129 1439337 command_runner.go:124] > [Install]
	I0811 00:53:21.187134 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:53:21.188214 1439337 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 00:53:21.188297 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 00:53:21.197313 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 00:53:21.208334 1439337 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0811 00:53:21.208358 1439337 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
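Those two keys are the entire /etc/crictl.yaml that gets written, pointing crictl at the dockershim socket. With that file in place, crictl inside the node talks to the Docker-backed runtime, e.g. (a usage sketch, assuming the node container name from this run):
	docker exec multinode-20210811005307-1387367 sudo crictl ps -a
	# or, bypassing the config file entirely:
	docker exec multinode-20210811005307-1387367 sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock ps -a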
	I0811 00:53:21.209738 1439337 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 00:53:21.297323 1439337 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 00:53:21.372264 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:53:21.380510 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:53:21.381058 1439337 command_runner.go:124] > [Unit]
	I0811 00:53:21.381097 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:53:21.381133 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:53:21.381157 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:53:21.381177 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:53:21.381217 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:53:21.381239 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:53:21.381328 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:53:21.381353 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:53:21.381370 1439337 command_runner.go:124] > [Service]
	I0811 00:53:21.381384 1439337 command_runner.go:124] > Type=notify
	I0811 00:53:21.381413 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:53:21.381438 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:53:21.381460 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:53:21.381494 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:53:21.381518 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:53:21.381539 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:53:21.381573 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:53:21.381599 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:53:21.381621 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:53:21.381653 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:53:21.381668 1439337 command_runner.go:124] > ExecStart=
	I0811 00:53:21.381707 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:53:21.381739 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:53:21.381757 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:53:21.381770 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:53:21.381785 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:53:21.381813 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:53:21.381828 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:53:21.381842 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:53:21.381858 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:53:21.381863 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:53:21.381872 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:53:21.381882 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:53:21.381889 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:53:21.381897 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:53:21.381919 1439337 command_runner.go:124] > KillMode=process
	I0811 00:53:21.381929 1439337 command_runner.go:124] > [Install]
	I0811 00:53:21.381935 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:53:21.382224 1439337 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 00:53:21.470440 1439337 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 00:53:21.479436 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:53:21.525516 1439337 command_runner.go:124] > 20.10.7
	I0811 00:53:21.528681 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:53:21.574595 1439337 command_runner.go:124] > 20.10.7
	I0811 00:53:21.582609 1439337 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 00:53:21.582720 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:53:21.613380 1439337 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 00:53:21.616635 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
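The grep/rewrite pair above is minikube's idempotent way of pinning host.minikube.internal in the node's /etc/hosts: drop any stale entry for the name, then append the fresh IP mapping. A rough stdlib-only Go sketch of the same idea (pinHost is a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// name to ip, dropping any previous line that already ends with that name.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}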
	I0811 00:53:21.625779 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:53:21.625847 1439337 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:53:21.663044 1439337 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.21.3
	I0811 00:53:21.663068 1439337 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.21.3
	I0811 00:53:21.663077 1439337 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.21.3
	I0811 00:53:21.663084 1439337 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.21.3
	I0811 00:53:21.663091 1439337 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:53:21.663097 1439337 command_runner.go:124] > k8s.gcr.io/pause:3.4.1
	I0811 00:53:21.663106 1439337 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	I0811 00:53:21.663112 1439337 command_runner.go:124] > k8s.gcr.io/coredns/coredns:v1.8.0
	I0811 00:53:21.663118 1439337 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	I0811 00:53:21.663125 1439337 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	I0811 00:53:21.663315 1439337 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:53:21.663339 1439337 docker.go:466] Images already preloaded, skipping extraction
	I0811 00:53:21.663389 1439337 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:53:21.697655 1439337 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.21.3
	I0811 00:53:21.697676 1439337 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.21.3
	I0811 00:53:21.697682 1439337 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.21.3
	I0811 00:53:21.697689 1439337 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.21.3
	I0811 00:53:21.697696 1439337 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:53:21.697702 1439337 command_runner.go:124] > k8s.gcr.io/pause:3.4.1
	I0811 00:53:21.697707 1439337 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	I0811 00:53:21.697714 1439337 command_runner.go:124] > k8s.gcr.io/coredns/coredns:v1.8.0
	I0811 00:53:21.697719 1439337 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	I0811 00:53:21.697726 1439337 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	I0811 00:53:21.700611 1439337 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:53:21.700631 1439337 cache_images.go:74] Images are preloaded, skipping loading
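"Images are preloaded, skipping loading" follows from comparing the output of `docker images --format {{.Repository}}:{{.Tag}}` against the image set expected from the preload tarball. A minimal Go sketch of that decision (expected list and helper name are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadPresent reports whether every expected image tag already shows up in
// `docker images`, in which case extracting the preload tarball can be skipped.
func preloadPresent(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, ref := range strings.Fields(string(out)) {
		have[ref] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	expected := []string{ // subset of the list logged above
		"k8s.gcr.io/kube-apiserver:v1.21.3",
		"k8s.gcr.io/etcd:3.4.13-0",
	}
	ok, err := preloadPresent(expected)
	fmt.Println(ok, err)
}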
	I0811 00:53:21.700685 1439337 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 00:53:21.784629 1439337 command_runner.go:124] > cgroupfs
	I0811 00:53:21.787914 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:53:21.787932 1439337 cni.go:154] 1 nodes found, recommending kindnet
	I0811 00:53:21.787947 1439337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 00:53:21.787965 1439337 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210811005307-1387367 NodeName:multinode-20210811005307-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 00:53:21.788104 1439337 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20210811005307-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 00:53:21.788190 1439337 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20210811005307-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
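The kubelet drop-in above is rendered from the cluster config just logged (Kubernetes version, node IP, hostname override, CNI conf dir) and then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small text/template sketch of that rendering, using a hypothetical kubeletOpts struct rather than minikube's real internal types:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts is a stand-in for the handful of values the unit needs.
type kubeletOpts struct {
	Version, NodeIP, Hostname, CNIConfDir string
}

const unitTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir={{.CNIConfDir}} --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	opts := kubeletOpts{
		Version:    "v1.21.3",
		NodeIP:     "192.168.49.2",
		Hostname:   "multinode-20210811005307-1387367",
		CNIConfDir: "/etc/cni/net.mk",
	}
	// Render to stdout; minikube instead scps the rendered bytes to the node.
	template.Must(template.New("unit").Parse(unitTmpl)).Execute(os.Stdout, opts)
}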
	I0811 00:53:21.788260 1439337 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 00:53:21.794232 1439337 command_runner.go:124] > kubeadm
	I0811 00:53:21.794247 1439337 command_runner.go:124] > kubectl
	I0811 00:53:21.794251 1439337 command_runner.go:124] > kubelet
	I0811 00:53:21.795113 1439337 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 00:53:21.795173 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 00:53:21.801536 1439337 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (410 bytes)
	I0811 00:53:21.814244 1439337 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 00:53:21.826825 1439337 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0811 00:53:21.839628 1439337 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 00:53:21.842527 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:53:21.850908 1439337 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367 for IP: 192.168.49.2
	I0811 00:53:21.850961 1439337 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 00:53:21.850979 1439337 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 00:53:21.851044 1439337 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.key
	I0811 00:53:21.851054 1439337 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt with IP's: []
	I0811 00:53:22.505827 1439337 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt ...
	I0811 00:53:22.505863 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt: {Name:mk58f59506cf4b15ae5dff9968b342b9b4dd6dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.506102 1439337 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.key ...
	I0811 00:53:22.506121 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.key: {Name:mk2a7a5ac082a1beab738847cb0aefdb72ccf8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.506221 1439337 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2
	I0811 00:53:22.506233 1439337 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 00:53:22.699824 1439337 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2 ...
	I0811 00:53:22.699859 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2: {Name:mkbf0e6641b314b363b4d83a714510687c837e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.700629 1439337 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2 ...
	I0811 00:53:22.700646 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2: {Name:mk92f9412a610334bc78bfca72003203465eeffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.700737 1439337 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt
	I0811 00:53:22.700804 1439337 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key
	I0811 00:53:22.700853 1439337 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key
	I0811 00:53:22.700864 1439337 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt with IP's: []
	I0811 00:53:23.020186 1439337 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt ...
	I0811 00:53:23.020224 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt: {Name:mk272d910979ad8934befd818ab4132904463f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:23.020428 1439337 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key ...
	I0811 00:53:23.020442 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key: {Name:mkea646513b6d583b727a9ece0d7a3b48dc4aa12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:23.020535 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0811 00:53:23.020558 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0811 00:53:23.020576 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0811 00:53:23.020591 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0811 00:53:23.020608 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 00:53:23.020622 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 00:53:23.020637 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 00:53:23.020647 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 00:53:23.020701 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 00:53:23.020741 1439337 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 00:53:23.020755 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 00:53:23.020792 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 00:53:23.020819 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 00:53:23.020846 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 00:53:23.020892 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:53:23.020924 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem -> /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.020939 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.020950 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.022022 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 00:53:23.038818 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0811 00:53:23.055261 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 00:53:23.071449 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 00:53:23.087530 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 00:53:23.103450 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 00:53:23.119851 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 00:53:23.136896 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 00:53:23.153418 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 00:53:23.169842 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 00:53:23.186461 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 00:53:23.202735 1439337 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 00:53:23.214909 1439337 ssh_runner.go:149] Run: openssl version
	I0811 00:53:23.219675 1439337 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0811 00:53:23.219752 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 00:53:23.226770 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.229514 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.229748 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.229797 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.234167 1439337 command_runner.go:124] > 3ec20f2e
	I0811 00:53:23.234563 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 00:53:23.241485 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 00:53:23.248101 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.250773 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.251053 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.251096 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.255479 1439337 command_runner.go:124] > b5213941
	I0811 00:53:23.255852 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 00:53:23.262757 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 00:53:23.269510 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.272214 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.272458 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.272501 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.277002 1439337 command_runner.go:124] > 51391683
	I0811 00:53:23.277586 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
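The three openssl/ln exchanges above install each CA bundle under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL-based clients use to look up trusted roots. A sketch that mirrors it from Go by shelling out to the same openssl invocation, so the hash semantics stay with openssl rather than being reimplemented (installCACert is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks certPath into dir under the OpenSSL subject-hash
// name (<hash>.0), matching the `openssl x509 -hash` + `ln -fs` steps above.
func installCACert(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate the -f in `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}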
	I0811 00:53:23.284533 1439337 kubeadm.go:390] StartCluster: {Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0811 00:53:23.284684 1439337 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 00:53:23.320406 1439337 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 00:53:23.327257 1439337 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0811 00:53:23.327319 1439337 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0811 00:53:23.327339 1439337 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0811 00:53:23.327424 1439337 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 00:53:23.334037 1439337 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 00:53:23.334121 1439337 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 00:53:23.340598 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0811 00:53:23.340654 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0811 00:53:23.340671 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0811 00:53:23.340682 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 00:53:23.340709 1439337 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 00:53:23.340746 1439337 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 00:53:23.489668 1439337 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0811 00:53:23.489782 1439337 command_runner.go:124] > [preflight] Running pre-flight checks
	I0811 00:53:23.775568 1439337 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0811 00:53:23.775679 1439337 command_runner.go:124] > KERNEL_VERSION: 5.8.0-1041-aws
	I0811 00:53:23.775763 1439337 command_runner.go:124] > DOCKER_VERSION: 20.10.7
	I0811 00:53:23.775842 1439337 command_runner.go:124] > DOCKER_GRAPH_DRIVER: overlay2
	I0811 00:53:23.775917 1439337 command_runner.go:124] > OS: Linux
	I0811 00:53:23.776004 1439337 command_runner.go:124] > CGROUPS_CPU: enabled
	I0811 00:53:23.776094 1439337 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0811 00:53:23.776169 1439337 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0811 00:53:23.776255 1439337 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0811 00:53:23.776330 1439337 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0811 00:53:23.776410 1439337 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0811 00:53:23.776480 1439337 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0811 00:53:23.776559 1439337 command_runner.go:124] > CGROUPS_HUGETLB: enabled
	I0811 00:53:23.862226 1439337 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0811 00:53:23.862377 1439337 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0811 00:53:23.862507 1439337 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0811 00:53:24.101454 1439337 out.go:204]   - Generating certificates and keys ...
	I0811 00:53:24.098465 1439337 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 00:53:24.101714 1439337 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0811 00:53:24.101834 1439337 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0811 00:53:24.489620 1439337 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0811 00:53:24.745160 1439337 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0811 00:53:25.328492 1439337 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0811 00:53:25.617819 1439337 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0811 00:53:25.976748 1439337 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0811 00:53:25.977134 1439337 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210811005307-1387367] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 00:53:26.549399 1439337 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0811 00:53:26.549791 1439337 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210811005307-1387367] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 00:53:26.729655 1439337 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0811 00:53:27.895229 1439337 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0811 00:53:28.530493 1439337 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0811 00:53:28.530825 1439337 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 00:53:28.809125 1439337 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 00:53:29.081586 1439337 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 00:53:29.542201 1439337 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 00:53:30.169254 1439337 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 00:53:30.181487 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 00:53:30.183215 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 00:53:30.183269 1439337 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0811 00:53:30.279607 1439337 out.go:204]   - Booting up control plane ...
	I0811 00:53:30.277515 1439337 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 00:53:30.279718 1439337 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 00:53:30.289376 1439337 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 00:53:30.295581 1439337 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 00:53:30.296521 1439337 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0811 00:53:30.299349 1439337 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0811 00:53:45.804322 1439337 command_runner.go:124] > [apiclient] All control plane components are healthy after 15.503567 seconds
	I0811 00:53:45.804450 1439337 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0811 00:53:45.815410 1439337 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0811 00:53:46.342832 1439337 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0811 00:53:46.343111 1439337 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210811005307-1387367 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0811 00:53:46.857125 1439337 out.go:204]   - Configuring RBAC rules ...
	I0811 00:53:46.854752 1439337 command_runner.go:124] > [bootstrap-token] Using token: wm9z73.gxcefuuupq33tt64
	I0811 00:53:46.857266 1439337 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0811 00:53:46.861951 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0811 00:53:46.870610 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0811 00:53:46.873601 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0811 00:53:46.876487 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0811 00:53:46.879446 1439337 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0811 00:53:46.890177 1439337 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0811 00:53:47.205330 1439337 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0811 00:53:47.289378 1439337 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0811 00:53:47.289462 1439337 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0811 00:53:47.289552 1439337 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0811 00:53:47.289585 1439337 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0811 00:53:47.289654 1439337 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0811 00:53:47.289714 1439337 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0811 00:53:47.289779 1439337 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0811 00:53:47.289835 1439337 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0811 00:53:47.289898 1439337 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0811 00:53:47.289984 1439337 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0811 00:53:47.290063 1439337 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0811 00:53:47.290160 1439337 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0811 00:53:47.290249 1439337 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0811 00:53:47.290349 1439337 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token wm9z73.gxcefuuupq33tt64 \
	I0811 00:53:47.290465 1439337 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 \
	I0811 00:53:47.290488 1439337 command_runner.go:124] > 	--control-plane 
	I0811 00:53:47.290583 1439337 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0811 00:53:47.290676 1439337 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token wm9z73.gxcefuuupq33tt64 \
	I0811 00:53:47.290789 1439337 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 
	I0811 00:53:47.299094 1439337 command_runner.go:124] ! 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0811 00:53:47.299479 1439337 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
	I0811 00:53:47.299661 1439337 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 00:53:47.299700 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:53:47.299713 1439337 cni.go:154] 1 nodes found, recommending kindnet
	I0811 00:53:47.302166 1439337 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 00:53:47.302231 1439337 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0811 00:53:47.309328 1439337 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0811 00:53:47.309350 1439337 command_runner.go:124] >   Size: 2603192   	Blocks: 5088       IO Block: 4096   regular file
	I0811 00:53:47.309359 1439337 command_runner.go:124] > Device: 3fh/63d	Inode: 2356928     Links: 1
	I0811 00:53:47.309368 1439337 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 00:53:47.309374 1439337 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:53:47.309384 1439337 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:53:47.309390 1439337 command_runner.go:124] > Change: 2021-07-02 14:49:52.887930340 +0000
	I0811 00:53:47.309395 1439337 command_runner.go:124] >  Birth: -
	I0811 00:53:47.309634 1439337 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0811 00:53:47.309648 1439337 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0811 00:53:47.332949 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 00:53:47.968060 1439337 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0811 00:53:47.973734 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0811 00:53:47.995686 1439337 command_runner.go:124] > serviceaccount/kindnet created
	I0811 00:53:48.003424 1439337 command_runner.go:124] > daemonset.apps/kindnet created
	I0811 00:53:48.008817 1439337 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 00:53:48.008932 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.008982 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=multinode-20210811005307-1387367 minikube.k8s.io/updated_at=2021_08_11T00_53_48_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.025684 1439337 command_runner.go:124] > -16
	I0811 00:53:48.025759 1439337 ops.go:34] apiserver oom_adj: -16
	I0811 00:53:48.182751 1439337 command_runner.go:124] > node/multinode-20210811005307-1387367 labeled
	I0811 00:53:48.182805 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0811 00:53:48.182892 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.269489 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:48.770253 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.855219 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:49.269787 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:49.355880 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:49.770563 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:49.851997 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:50.270377 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:50.358188 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:50.769718 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:50.857668 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:51.270183 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:51.398578 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:51.770197 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:51.853748 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:52.270541 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:52.364693 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:52.770210 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:52.859953 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:53.270549 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:53.360996 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:53.770505 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:53.862013 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:54.270508 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:54.366704 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:54.770339 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:54.849916 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:55.269950 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:55.361500 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:55.769737 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:55.854548 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:56.269737 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:56.367685 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:56.770203 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:56.858192 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:57.269663 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:57.367457 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:57.770035 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:57.858672 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:58.270192 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:58.423406 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:58.770740 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:58.866235 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:59.269712 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:59.371686 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:59.770197 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:59.896603 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:54:00.270186 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:54:00.369297 1439337 command_runner.go:124] > NAME      SECRETS   AGE
	I0811 00:54:00.369319 1439337 command_runner.go:124] > default   1         0s
	I0811 00:54:00.369338 1439337 kubeadm.go:985] duration metric: took 12.360452191s to wait for elevateKubeSystemPrivileges.
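The repeated `kubectl get sa default` calls from 00:53:48 to 00:54:00 are a plain poll-until-ready loop: the "default" service account only exists once the controller manager has created it, so the command is retried roughly every 500ms until it succeeds or a deadline expires. A generic stdlib-only sketch of that loop (waitFor is a hypothetical helper, not minikube's elevateKubeSystemPrivileges code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor retries cmd every interval until it exits zero or the timeout expires.
func waitFor(interval, timeout time.Duration, name string, args ...string) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", name)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor(500*time.Millisecond, 2*time.Minute,
		"kubectl", "get", "sa", "default", "--namespace", "default")
	fmt.Println(err)
}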
	I0811 00:54:00.369355 1439337 kubeadm.go:392] StartCluster complete in 37.08482894s
	I0811 00:54:00.369373 1439337 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:54:00.369456 1439337 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:00.370517 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:54:00.370999 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:00.371278 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:00.372857 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 00:54:00.372881 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.372887 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.372892 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.373108 1439337 cert_rotation.go:137] Starting client certificate rotation controller
	I0811 00:54:00.402964 1439337 round_trippers.go:457] Response Status: 200 OK in 30 milliseconds
	I0811 00:54:00.402985 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.402991 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.402997 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.403000 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.403004 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.403008 1439337 round_trippers.go:463]     Content-Length: 291
	I0811 00:54:00.403011 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.403042 1439337 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"413","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.403741 1439337 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"413","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.403787 1439337 round_trippers.go:432] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 00:54:00.403794 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.403799 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.403803 1439337 round_trippers.go:442]     Content-Type: application/json
	I0811 00:54:00.403807 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.407774 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:00.407825 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.407837 1439337 round_trippers.go:463]     Content-Length: 291
	I0811 00:54:00.407842 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.407845 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.407848 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.407852 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.407856 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.407874 1439337 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"416","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.908742 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 00:54:00.908770 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.908776 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.908781 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.911151 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:00.911225 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.911237 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.911242 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.911247 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.911256 1439337 round_trippers.go:463]     Content-Length: 291
	I0811 00:54:00.911265 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.911273 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.911306 1439337 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"458","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.911392 1439337 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210811005307-1387367" rescaled to 1
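
The GET/PUT pair above is minikube scaling the coredns deployment from 2 replicas down to 1 through the apps/v1 Scale subresource, which is what the kapi.go "rescaled to 1" line records. A minimal client-go sketch of that interaction follows; it is illustrative only (the function name, kubeconfig handling, and error reporting are mine, not minikube's kapi.go), assuming a 2021-era client-go where GetScale/UpdateScale take a context:

    // Sketch only: mirrors the GET .../deployments/coredns/scale followed by a
    // PUT with spec.replicas lowered, as seen in the round_trippers lines above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func scaleCoreDNS(ctx context.Context, kubeconfig string, replicas int32) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        // GET the current Scale object for the coredns deployment.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        // PUT the same object back with the target replica count.
        scale.Spec.Replicas = replicas
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            return err
        }
        fmt.Printf("coredns rescaled to %d\n", replicas)
        return nil
    }

Keeping a single coredns replica on a fresh single-node cluster is the default minikube applies here; the subsequent GET at 00:54:00.908742 simply confirms spec.replicas and status.replicas have both settled at 1.
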
	I0811 00:54:00.911468 1439337 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:54:00.913951 1439337 out.go:177] * Verifying Kubernetes components...
	I0811 00:54:00.911647 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 00:54:00.911807 1439337 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0811 00:54:00.914135 1439337 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210811005307-1387367"
	I0811 00:54:00.914151 1439337 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210811005307-1387367"
	W0811 00:54:00.914157 1439337 addons.go:147] addon storage-provisioner should already be in state true
	I0811 00:54:00.914182 1439337 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 00:54:00.914722 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:54:00.914888 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:54:00.914954 1439337 addons.go:59] Setting default-storageclass=true in profile "multinode-20210811005307-1387367"
	I0811 00:54:00.914970 1439337 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210811005307-1387367"
	I0811 00:54:00.915356 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:54:00.972127 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:00.972410 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-202
10811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:00.973770 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0811 00:54:00.973791 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.973796 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.973801 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.976125 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:00.976145 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.976149 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.976153 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.976157 1439337 round_trippers.go:463]     Content-Length: 109
	I0811 00:54:00.976160 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.976164 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.976168 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.976186 1439337 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"458"},"items":[]}
	I0811 00:54:00.976870 1439337 addons.go:135] Setting addon default-storageclass=true in "multinode-20210811005307-1387367"
	W0811 00:54:00.976891 1439337 addons.go:147] addon default-storageclass should already be in state true
	I0811 00:54:00.976915 1439337 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 00:54:00.977467 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:54:01.014733 1439337 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:54:01.014853 1439337 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:54:01.014863 1439337 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 00:54:01.014930 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:54:01.064966 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:54:01.075831 1439337 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 00:54:01.075856 1439337 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 00:54:01.075919 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:54:01.124872 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:54:01.176492 1439337 command_runner.go:124] > apiVersion: v1
	I0811 00:54:01.176515 1439337 command_runner.go:124] > data:
	I0811 00:54:01.176520 1439337 command_runner.go:124] >   Corefile: |
	I0811 00:54:01.176525 1439337 command_runner.go:124] >     .:53 {
	I0811 00:54:01.176530 1439337 command_runner.go:124] >         errors
	I0811 00:54:01.176535 1439337 command_runner.go:124] >         health {
	I0811 00:54:01.176541 1439337 command_runner.go:124] >            lameduck 5s
	I0811 00:54:01.176545 1439337 command_runner.go:124] >         }
	I0811 00:54:01.176551 1439337 command_runner.go:124] >         ready
	I0811 00:54:01.176565 1439337 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0811 00:54:01.176577 1439337 command_runner.go:124] >            pods insecure
	I0811 00:54:01.176584 1439337 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0811 00:54:01.176595 1439337 command_runner.go:124] >            ttl 30
	I0811 00:54:01.176599 1439337 command_runner.go:124] >         }
	I0811 00:54:01.176610 1439337 command_runner.go:124] >         prometheus :9153
	I0811 00:54:01.176616 1439337 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0811 00:54:01.176627 1439337 command_runner.go:124] >            max_concurrent 1000
	I0811 00:54:01.176632 1439337 command_runner.go:124] >         }
	I0811 00:54:01.176637 1439337 command_runner.go:124] >         cache 30
	I0811 00:54:01.176642 1439337 command_runner.go:124] >         loop
	I0811 00:54:01.176649 1439337 command_runner.go:124] >         reload
	I0811 00:54:01.176657 1439337 command_runner.go:124] >         loadbalance
	I0811 00:54:01.176669 1439337 command_runner.go:124] >     }
	I0811 00:54:01.176675 1439337 command_runner.go:124] > kind: ConfigMap
	I0811 00:54:01.176685 1439337 command_runner.go:124] > metadata:
	I0811 00:54:01.176700 1439337 command_runner.go:124] >   creationTimestamp: "2021-08-11T00:53:47Z"
	I0811 00:54:01.176708 1439337 command_runner.go:124] >   name: coredns
	I0811 00:54:01.176714 1439337 command_runner.go:124] >   namespace: kube-system
	I0811 00:54:01.176725 1439337 command_runner.go:124] >   resourceVersion: "267"
	I0811 00:54:01.176731 1439337 command_runner.go:124] >   uid: 43a5565a-2909-4945-9fef-505f8754f208
	I0811 00:54:01.179090 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 00:54:01.179534 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:01.179818 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-202
10811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:01.181119 1439337 node_ready.go:35] waiting up to 6m0s for node "multinode-20210811005307-1387367" to be "Ready" ...
	I0811 00:54:01.181195 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:01.181208 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:01.181213 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:01.181218 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:01.183374 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:01.183392 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:01.183397 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:01.183401 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:01.183405 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:01.183412 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:01.183416 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:01 GMT
	I0811 00:54:01.183636 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:01.264182 1439337 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:54:01.288198 1439337 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 00:54:01.675248 1439337 command_runner.go:124] > configmap/coredns replaced
	I0811 00:54:01.685971 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:01.686036 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:01.686056 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:01.686073 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:01.688180 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:01.688238 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:01.688254 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:01.688268 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:01.688282 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:01.688310 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:01.688329 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:01 GMT
	I0811 00:54:01.688474 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:01.689842 1439337 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
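
The bash pipeline run at 00:54:01.179090 takes the Corefile dumped above, splices a hosts stanza in front of the forward block, and replaces the configmap, which is what the "host record injected into CoreDNS" line reports. Reconstructed from the sed expression (not read back from the cluster), the affected part of the Corefile should end up roughly like this:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }

This lets pods resolve host.minikube.internal to 192.168.49.1, the host-side address of the cluster's docker network, while every other name still falls through to the upstream resolvers.
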
	I0811 00:54:01.764877 1439337 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0811 00:54:01.773913 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0811 00:54:01.784696 1439337 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0811 00:54:01.791154 1439337 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0811 00:54:01.800648 1439337 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0811 00:54:01.814770 1439337 command_runner.go:124] > pod/storage-provisioner created
	I0811 00:54:01.821224 1439337 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0811 00:54:01.824320 1439337 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0811 00:54:01.824350 1439337 addons.go:344] enableAddons completed in 912.55125ms
	I0811 00:54:02.185368 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:02.185399 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:02.185407 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:02.185412 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:02.187600 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:02.187622 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:02.187627 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:02.187631 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:02.187635 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:02.187638 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:02.187642 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:02 GMT
	I0811 00:54:02.187757 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:02.685270 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:02.685295 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:02.685301 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:02.685306 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:02.687927 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:02.687977 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:02.687994 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:02.688010 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:02 GMT
	I0811 00:54:02.688026 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:02.688050 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:02.688067 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:02.688232 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:03.185505 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:03.185527 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:03.185534 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:03.185539 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:03.187798 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:03.187817 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:03.187823 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:03.187829 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:03.187833 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:03.187837 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:03 GMT
	I0811 00:54:03.187841 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:03.188193 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:03.188475 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
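
From here the log is dominated by the same GET against /api/v1/nodes/multinode-20210811005307-1387367 roughly every 500 ms: node_ready.go polls until the node reports a Ready condition of True or the 6m0s budget from start.go expires (the resourceVersion staying at 391 in each response shows the Node object has not changed yet). A rough equivalent of that loop, sketched with client-go and not taken from node_ready.go (the helper name and error handling are placeholders):

    // Sketch of the readiness poll behind the repeated node GETs above.
    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

Returning false, nil on a transient error keeps the poll alive instead of aborting the 6-minute wait; only the timeout or a Ready=True condition ends it.
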
	I0811 00:54:03.685397 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:03.685455 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:03.685473 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:03.685489 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:03.688335 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:03.688352 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:03.688357 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:03.688361 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:03.688364 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:03.688368 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:03.688371 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:03 GMT
	I0811 00:54:03.688493 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:04.186230 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:04.186251 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:04.186257 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:04.186262 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:04.188256 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:04.188270 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:04.188275 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:04.188279 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:04.188282 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:04.188286 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:04.188289 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:04 GMT
	I0811 00:54:04.188415 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:04.685280 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:04.685305 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:04.685312 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:04.685317 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:04.688026 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:04.688077 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:04.688104 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:04.688119 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:04 GMT
	I0811 00:54:04.688132 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:04.688146 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:04.688167 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:04.688531 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:05.186172 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:05.186193 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:05.186199 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:05.186204 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:05.188344 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:05.188376 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:05.188382 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:05.188386 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:05.188390 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:05.188393 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:05.188397 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:05 GMT
	I0811 00:54:05.188539 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:05.188827 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:05.686189 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:05.686210 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:05.686216 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:05.686221 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:05.688315 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:05.688393 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:05.688406 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:05.688411 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:05.688414 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:05.688419 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:05.688423 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:05 GMT
	I0811 00:54:05.688559 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:06.186102 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:06.186130 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:06.186137 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:06.186142 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:06.188328 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:06.188380 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:06.188397 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:06.188411 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:06.188425 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:06.188450 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:06.188468 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:06 GMT
	I0811 00:54:06.188621 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:06.686119 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:06.686150 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:06.686156 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:06.686161 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:06.688594 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:06.688641 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:06.688647 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:06 GMT
	I0811 00:54:06.688651 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:06.688655 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:06.688659 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:06.688662 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:06.688773 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:07.185540 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:07.185566 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:07.185573 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:07.185577 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:07.187644 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:07.187666 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:07.187671 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:07.187675 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:07.187679 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:07.187682 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:07.187685 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:07 GMT
	I0811 00:54:07.187801 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:07.685878 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:07.685912 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:07.685919 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:07.685924 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:07.688573 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:07.688619 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:07.688635 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:07.688652 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:07 GMT
	I0811 00:54:07.688666 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:07.688678 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:07.688706 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:07.688850 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:07.689144 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:08.185304 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:08.185331 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:08.185338 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:08.185343 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:08.187419 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:08.187435 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:08.187440 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:08.187444 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:08.187447 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:08.187451 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:08.187457 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:08 GMT
	I0811 00:54:08.187605 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:08.685995 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:08.686024 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:08.686031 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:08.686037 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:08.688643 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:08.688661 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:08.688667 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:08 GMT
	I0811 00:54:08.688671 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:08.688674 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:08.688678 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:08.688681 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:08.688789 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:09.185573 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:09.185602 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:09.185609 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:09.185614 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:09.187695 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:09.187712 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:09.187717 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:09.187721 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:09.187725 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:09.187728 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:09.187732 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:09 GMT
	I0811 00:54:09.187895 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:09.686209 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:09.686236 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:09.686243 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:09.686250 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:09.688814 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:09.688831 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:09.688836 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:09.688840 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:09.688844 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:09.688847 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:09.688851 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:09 GMT
	I0811 00:54:09.688996 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:09.689299 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:10.185268 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:10.185295 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:10.185302 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:10.185309 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:10.187416 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:10.187437 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:10.187442 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:10.187446 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:10.187450 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:10 GMT
	I0811 00:54:10.187453 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:10.187457 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:10.187603 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:10.685206 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:10.685237 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:10.685243 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:10.685248 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:10.687795 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:10.687811 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:10.687817 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:10.687820 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:10.687824 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:10.687828 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:10.687832 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:10 GMT
	I0811 00:54:10.688006 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:11.186048 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:11.186078 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:11.186084 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:11.186089 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:11.188176 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:11.188192 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:11.188198 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:11.188202 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:11.188205 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:11.188209 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:11.188212 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:11 GMT
	I0811 00:54:11.188329 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:11.686197 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:11.686228 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:11.686234 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:11.686239 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:11.688800 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:11.688817 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:11.688822 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:11.688828 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:11.688831 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:11.688835 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:11.688838 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:11 GMT
	I0811 00:54:11.688964 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:12.185849 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:12.185876 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:12.185882 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:12.185887 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:12.187878 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:12.187895 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:12.187900 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:12.187904 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:12.187907 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:12.187911 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:12 GMT
	I0811 00:54:12.187915 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:12.188039 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:12.188306 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:12.686007 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:12.686035 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:12.686041 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:12.686046 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:12.688616 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:12.688632 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:12.688637 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:12.688641 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:12.688644 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:12.688648 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:12.688652 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:12 GMT
	I0811 00:54:12.688760 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:13.185263 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:13.185292 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:13.185298 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:13.185303 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:13.187308 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:13.187323 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:13.187328 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:13.187332 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:13.187335 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:13.187339 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:13 GMT
	I0811 00:54:13.187342 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:13.187459 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:13.686227 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:13.686260 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:13.686266 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:13.686271 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:13.688906 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:13.688923 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:13.688928 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:13.688933 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:13.688937 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:13.688940 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:13 GMT
	I0811 00:54:13.688944 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:13.689086 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:14.186050 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:14.186081 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:14.186087 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:14.186092 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:14.188076 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:14.188096 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:14.188101 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:14.188105 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:14.188108 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:14 GMT
	I0811 00:54:14.188111 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:14.188114 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:14.188217 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:14.188485 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:14.686231 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:14.686260 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:14.686267 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:14.686272 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:14.688989 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:14.689006 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:14.689027 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:14.689031 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:14.689034 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:14.689037 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:14.689041 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:14 GMT
	I0811 00:54:14.689225 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:15.185896 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:15.185925 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:15.185931 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:15.185936 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:15.187997 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:15.188018 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:15.188023 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:15.188027 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:15.188031 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:15.188034 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:15.188038 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:15 GMT
	I0811 00:54:15.188140 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:15.686119 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:15.686151 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:15.686158 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:15.686163 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:15.687819 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:15.687834 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:15.687839 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:15.687843 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:15.687847 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:15 GMT
	I0811 00:54:15.687850 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:15.687854 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:15.687968 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:16.185920 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:16.185950 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:16.185956 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:16.185962 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:16.187928 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:16.187946 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:16.187951 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:16.187955 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:16.187960 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:16.187964 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:16.187967 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:16 GMT
	I0811 00:54:16.188140 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:16.685275 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:16.685310 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:16.685316 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:16.685321 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:16.687618 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:16.687633 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:16.687638 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:16.687642 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:16.687646 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:16.687650 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:16.687653 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:16 GMT
	I0811 00:54:16.687768 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:16.688048 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:17.186023 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:17.186048 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:17.186054 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:17.186059 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:17.188141 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:17.188156 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:17.188162 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:17.188165 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:17.188169 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:17.188173 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:17.188177 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:17 GMT
	I0811 00:54:17.188315 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:17.685962 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:17.685991 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:17.685998 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:17.686003 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:17.688663 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:17.688680 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:17.688685 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:17.688689 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:17.688693 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:17.688697 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:17 GMT
	I0811 00:54:17.688700 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:17.688863 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:18.185695 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:18.185724 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:18.185730 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:18.185735 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:18.187768 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:18.187784 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:18.187789 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:18.187792 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:18.187796 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:18.187799 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:18 GMT
	I0811 00:54:18.187803 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:18.187928 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:18.685928 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:18.685960 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:18.685967 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:18.685972 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:18.688580 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:18.688600 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:18.688605 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:18.688609 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:18.688612 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:18.688616 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:18.688620 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:18 GMT
	I0811 00:54:18.688912 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:18.689210 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:19.186204 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:19.186231 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:19.186237 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:19.186242 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:19.188215 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:19.188234 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:19.188238 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:19.188242 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:19 GMT
	I0811 00:54:19.188246 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:19.188251 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:19.188255 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:19.188360 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:19.686225 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:19.686256 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:19.686263 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:19.686268 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:19.688859 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:19.688877 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:19.688882 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:19.688885 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:19.688889 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:19 GMT
	I0811 00:54:19.688893 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:19.688896 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:19.689060 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:20.186217 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:20.186244 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:20.186250 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:20.186255 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:20.188291 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:20.188308 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:20.188313 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:20.188319 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:20 GMT
	I0811 00:54:20.188322 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:20.188326 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:20.188329 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:20.188470 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:20.686250 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:20.686280 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:20.686287 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:20.686292 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:20.689061 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:20.689078 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:20.689084 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:20.689088 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:20 GMT
	I0811 00:54:20.689092 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:20.689095 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:20.689098 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:20.689210 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:20.689466 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:21.186161 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:21.186188 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.186194 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.186199 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.188184 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:21.188199 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.188204 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.188208 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.188211 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.188215 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.188218 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.188339 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:21.686234 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:21.686260 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.686266 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.686270 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.688494 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:21.688515 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.688521 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.688525 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.688530 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.688534 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.688537 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.688644 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:21.688903 1439337 node_ready.go:49] node "multinode-20210811005307-1387367" has status "Ready":"True"
	I0811 00:54:21.688919 1439337 node_ready.go:38] duration metric: took 20.507776617s waiting for node "multinode-20210811005307-1387367" to be "Ready" ...
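	The twenty seconds of polling recorded above is the node readiness wait: the same GET against /api/v1/nodes/multinode-20210811005307-1387367 is repeated roughly every half second until the Node's Ready condition flips to True. The following client-go sketch shows that general pattern; it is illustrative only and is not minikube's actual node_ready.go (the kubeconfig path, poll interval, and helper names here are assumptions).

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the NodeReady condition on the given Node is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: read the kubeconfig from the default location; the test
		// harness builds its client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Node name taken from the log above.
		name := "multinode-20210811005307-1387367"

		// Poll roughly every half second, as the timestamps above suggest, until
		// the node reports Ready or the timeout expires.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient apiserver errors and keep polling
			}
			return nodeIsReady(node), nil
		})
		if err != nil {
			panic(fmt.Errorf("node %q never became Ready: %w", name, err))
		}
		fmt.Printf("node %q is Ready\n", name)
	}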
	I0811 00:54:21.688929 1439337 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:54:21.689043 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:21.689058 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.689063 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.689069 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.691889 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:21.691954 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.691964 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.691968 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.691972 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.691982 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.691988 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.692574 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52501 chars]
	I0811 00:54:21.699616 1439337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:21.699699 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:21.699712 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.699720 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.699730 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.701575 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:21.701614 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.701630 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.701651 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.701655 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.701658 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.701662 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.701759 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:22.208179 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:22.208208 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:22.208214 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:22.208218 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:22.210474 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:22.210523 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:22.210539 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:22.210552 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:22.210566 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:22.210579 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:22.210603 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:22 GMT
	I0811 00:54:22.210725 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:22.708569 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:22.708600 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:22.708606 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:22.708611 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:22.711363 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:22.711422 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:22.711435 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:22.711440 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:22.711443 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:22.711447 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:22.711450 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:22 GMT
	I0811 00:54:22.711577 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:23.208184 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:23.208211 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:23.208217 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:23.208222 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:23.210427 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:23.210449 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:23.210454 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:23.210457 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:23.210461 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:23.210464 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:23.210470 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:23 GMT
	I0811 00:54:23.210587 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:23.708681 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:23.708711 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:23.708719 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:23.708724 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:23.711409 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:23.711466 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:23.711483 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:23.711497 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:23.711510 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:23.711523 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:23.711555 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:23 GMT
	I0811 00:54:23.711706 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:23.712060 1439337 pod_ready.go:102] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-11 00:54:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
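
Editor's note: the Unschedulable message above means the scheduler found no node whose taints the coredns pod tolerates; the control-plane node still carries node.kubernetes.io/not-ready at this point. A hedged sketch of that taint-versus-toleration check, using the ToleratesTaint helper from k8s.io/api/core/v1; untoleratedTaints itself is a hypothetical helper, not scheduler code.

```go
// Illustrative check for why a pod stays Pending on a tainted node.
package scheduling

import corev1 "k8s.io/api/core/v1"

// untoleratedTaints returns the node taints (here node.kubernetes.io/not-ready)
// that none of the pod's tolerations cover; an untolerated NoSchedule taint
// keeps the pod Pending, as seen for coredns above.
func untoleratedTaints(pod *corev1.Pod, node *corev1.Node) []corev1.Taint {
	var out []corev1.Taint
	for _, taint := range node.Spec.Taints {
		tolerated := false
		for i := range pod.Spec.Tolerations {
			if pod.Spec.Tolerations[i].ToleratesTaint(&taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			out = append(out, taint)
		}
	}
	return out
}
```
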
	I0811 00:54:24.207851 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:24.207877 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:24.207883 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:24.207888 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:24.210090 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:24.210136 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:24.210149 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:24.210153 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:24.210156 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:24 GMT
	I0811 00:54:24.210172 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:24.210176 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:24.210291 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:24.708758 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:24.708787 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:24.708794 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:24.708798 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:24.711466 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:24.711511 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:24.711528 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:24.711542 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:24 GMT
	I0811 00:54:24.711585 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:24.711607 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:24.711621 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:24.711741 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:25.208776 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:25.208805 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:25.208812 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:25.208817 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:25.211061 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:25.211080 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:25.211085 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:25.211089 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:25.211111 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:25.211115 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:25.211120 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:25 GMT
	I0811 00:54:25.211238 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:25.708828 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:25.708859 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:25.708865 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:25.708870 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:25.710933 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:25.710979 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:25.710996 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:25.711010 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:25 GMT
	I0811 00:54:25.711023 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:25.711036 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:25.711058 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:25.711233 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:26.207820 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:26.207851 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:26.207857 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:26.207862 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:26.210007 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:26.210025 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:26.210031 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:26.210034 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:26.210038 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:26.210041 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:26.210045 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:26 GMT
	I0811 00:54:26.210158 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:26.210521 1439337 pod_ready.go:102] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-11 00:54:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0811 00:54:26.708200 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:26.708225 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:26.708231 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:26.708237 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:26.710822 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:26.710844 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:26.710849 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:26.710853 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:26.710856 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:26.710862 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:26.710866 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:26 GMT
	I0811 00:54:26.711099 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"506","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5944 chars]
	I0811 00:54:26.711513 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:26.711532 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:26.711537 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:26.711543 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:26.713286 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:26.713302 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:26.713306 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:26.713310 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:26.713314 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:26.713317 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:26.713321 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:26 GMT
	I0811 00:54:26.713822 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.208086 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:27.208119 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.208125 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.208130 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.214523 1439337 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0811 00:54:27.214544 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.214549 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.214553 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.214557 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.214560 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.214564 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.214752 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"506","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5944 chars]
	I0811 00:54:27.215175 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.215197 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.215203 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.215208 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.217286 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.217303 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.217309 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.217313 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.217316 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.217320 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.217324 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.217615 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.708216 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:27.708247 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.708256 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.708261 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.711039 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.711061 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.711067 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.711070 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.711075 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.711081 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.711085 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.711278 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 6071 chars]
	I0811 00:54:27.711669 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.711686 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.711692 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.711696 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.713719 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.713754 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.713759 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.713764 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.713767 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.713772 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.713786 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.713899 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.714183 1439337 pod_ready.go:92] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.714209 1439337 pod_ready.go:81] duration metric: took 6.01455566s waiting for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.714225 1439337 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.714283 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210811005307-1387367
	I0811 00:54:27.714294 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.714300 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.714304 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.716083 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.716099 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.716104 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.716108 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.716111 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.716115 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.716118 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.716248 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210811005307-1387367","namespace":"kube-system","uid":"b98555c3-d9ce-452c-a2de-7ee50a50311d","resourceVersion":"459","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.mirror":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.seen":"2021-08-11T00:53:47.643869676Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5588 chars]
	I0811 00:54:27.716542 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.716557 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.716562 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.716567 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.718079 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.718095 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.718100 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.718103 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.718108 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.718111 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.718116 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.718371 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.718616 1439337 pod_ready.go:92] pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.718632 1439337 pod_ready.go:81] duration metric: took 4.395624ms waiting for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.718648 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.718696 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210811005307-1387367
	I0811 00:54:27.718707 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.718712 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.718719 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.720419 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.720436 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.720441 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.720445 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.720448 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.720451 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.720455 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.720608 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210811005307-1387367","namespace":"kube-system","uid":"520b1e32-479d-4e0e-8867-276c958ae125","resourceVersion":"460","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.mirror":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.seen":"2021-08-11T00:53:31.835501949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 8113 chars]
	I0811 00:54:27.720983 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.720995 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.721001 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.721034 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.722659 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.722699 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.722715 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.722730 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.722743 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.722757 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.722782 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.722909 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.723192 1439337 pod_ready.go:92] pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.723210 1439337 pod_ready.go:81] duration metric: took 4.552898ms waiting for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.723222 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.723279 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210811005307-1387367
	I0811 00:54:27.723291 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.723296 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.723302 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.725128 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.725150 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.725155 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.725158 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.725162 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.725176 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.725182 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.725281 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210811005307-1387367","namespace":"kube-system","uid":"f0ca8783-2ede-4c80-adc7-94aa58a85ad1","resourceVersion":"462","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.mirror":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.seen":"2021-08-11T00:53:31.835503352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/c
onfig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/conf [truncated 7679 chars]
	I0811 00:54:27.725659 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.725675 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.725681 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.725685 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.727299 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.727315 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.727320 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.727323 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.727326 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.727330 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.727333 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.727584 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.727836 1439337 pod_ready.go:92] pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.727852 1439337 pod_ready.go:81] duration metric: took 4.621666ms waiting for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.727863 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.727915 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjx8s
	I0811 00:54:27.727926 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.727930 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.727935 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.729698 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.729726 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.729731 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.729737 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.729751 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.729762 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.729766 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.730096 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7","resourceVersion":"482","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"37aa45af-7498-4003-abc1-af1fe65a80b1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37aa45af-7498-4003-abc1-af1fe65a80b1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5777 chars]
	I0811 00:54:27.730435 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.730453 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.730459 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.730464 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.732310 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.732348 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.732365 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.732392 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.732412 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.732428 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.732441 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.732576 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.732838 1439337 pod_ready.go:92] pod "kube-proxy-sjx8s" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.732851 1439337 pod_ready.go:81] duration metric: took 4.977569ms waiting for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.732861 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.909224 1439337 request.go:600] Waited for 176.303519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:54:27.909312 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:54:27.909362 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.909376 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.909381 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.911653 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.911700 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.911716 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.911731 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.911745 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.911759 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.911782 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.912027 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210811005307-1387367","namespace":"kube-system","uid":"7a24d14d-4566-4ab3-a237-634064615837","resourceVersion":"476","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.mirror":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.seen":"2021-08-11T00:53:47.643889688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"
f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f: [truncated 4561 chars]
	I0811 00:54:28.108661 1439337 request.go:600] Waited for 196.314258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:28.108727 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:28.108736 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.108742 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.108749 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.111360 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:28.111379 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.111384 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.111388 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.111435 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.111445 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.111448 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.111532 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:28.111814 1439337 pod_ready.go:92] pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:28.111828 1439337 pod_ready.go:81] duration metric: took 378.95647ms waiting for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:28.111840 1439337 pod_ready.go:38] duration metric: took 6.422878791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:54:28.111860 1439337 api_server.go:50] waiting for apiserver process to appear ...
	I0811 00:54:28.111912 1439337 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 00:54:28.126199 1439337 command_runner.go:124] > 1962
	I0811 00:54:28.126233 1439337 api_server.go:70] duration metric: took 27.214733619s to wait for apiserver process to appear ...
	I0811 00:54:28.126241 1439337 api_server.go:86] waiting for apiserver healthz status ...
	I0811 00:54:28.126267 1439337 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 00:54:28.134963 1439337 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 00:54:28.135056 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/version?timeout=32s
	I0811 00:54:28.135067 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.135072 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.135089 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.135850 1439337 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0811 00:54:28.135866 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.135871 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.135875 1439337 round_trippers.go:463]     Content-Length: 263
	I0811 00:54:28.135878 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.135881 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.135885 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.135896 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.135925 1439337 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0811 00:54:28.136017 1439337 api_server.go:139] control plane version: v1.21.3
	I0811 00:54:28.136032 1439337 api_server.go:129] duration metric: took 9.786549ms to wait for apiserver health ...
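The health wait above is just an HTTPS GET against /healthz followed by a GET /version to read the control-plane version. A minimal, self-contained Go sketch of that pattern, using the 192.168.49.2:8443 endpoint from the log; skipping TLS verification is an illustrative shortcut only, since the real client authenticates with the cluster CA and client certificates:

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustrative shortcut only
		}

		// Poll /healthz until the apiserver answers 200 OK, as the wait above does.
		for i := 0; i < 60; i++ {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				break
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(500 * time.Millisecond)
		}

		// Then read the control-plane version, mirroring GET /version?timeout=32s.
		resp, err := client.Get("https://192.168.49.2:8443/version?timeout=32s")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		if err := json.Unmarshal(body, &v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // e.g. v1.21.3
	}
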
	I0811 00:54:28.136039 1439337 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 00:54:28.308286 1439337 request.go:600] Waited for 172.183028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.308385 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.308401 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.308431 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.308445 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.311908 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:28.311975 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.311991 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.312037 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.312057 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.312072 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.312117 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.312640 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 55311 chars]
	I0811 00:54:28.314241 1439337 system_pods.go:59] 8 kube-system pods found
	I0811 00:54:28.314277 1439337 system_pods.go:61] "coredns-558bd4d5db-lpxc6" [839d8a5e-9cef-4c9e-a07f-db7f529aaa6a] Running
	I0811 00:54:28.314286 1439337 system_pods.go:61] "etcd-multinode-20210811005307-1387367" [b98555c3-d9ce-452c-a2de-7ee50a50311d] Running
	I0811 00:54:28.314294 1439337 system_pods.go:61] "kindnet-xqj59" [5b61604f-90bf-41cc-9637-18fe68a7551c] Running
	I0811 00:54:28.314300 1439337 system_pods.go:61] "kube-apiserver-multinode-20210811005307-1387367" [520b1e32-479d-4e0e-8867-276c958ae125] Running
	I0811 00:54:28.314305 1439337 system_pods.go:61] "kube-controller-manager-multinode-20210811005307-1387367" [f0ca8783-2ede-4c80-adc7-94aa58a85ad1] Running
	I0811 00:54:28.314317 1439337 system_pods.go:61] "kube-proxy-sjx8s" [b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7] Running
	I0811 00:54:28.314322 1439337 system_pods.go:61] "kube-scheduler-multinode-20210811005307-1387367" [7a24d14d-4566-4ab3-a237-634064615837] Running
	I0811 00:54:28.314327 1439337 system_pods.go:61] "storage-provisioner" [3e157891-c819-4c4a-8e4d-da074ce5a161] Running
	I0811 00:54:28.314334 1439337 system_pods.go:74] duration metric: took 178.290592ms to wait for pod list to return data ...
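The kube-system pod wait above boils down to listing pods in that namespace and checking each pod's phase. A rough client-go equivalent is sketched below; the kubeconfig path is an assumption for illustration, and minikube's own check (system_pods.go) differs in detail:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location, purely for the example.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
			if p.Status.Phase != corev1.PodRunning {
				fmt.Println("  -> not yet Running, would keep polling")
			}
		}
	}
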
	I0811 00:54:28.314346 1439337 default_sa.go:34] waiting for default service account to be created ...
	I0811 00:54:28.508685 1439337 request.go:600] Waited for 194.27185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0811 00:54:28.508769 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0811 00:54:28.508820 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.508833 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.508839 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.511273 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:28.511298 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.511303 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.511307 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.511310 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.511314 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.511317 1439337 round_trippers.go:463]     Content-Length: 304
	I0811 00:54:28.511320 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.511340 1439337 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5d93c483-d14d-4998-a058-1bf4f42a56a6","resourceVersion":"405","creationTimestamp":"2021-08-11T00:54:00Z"},"secrets":[{"name":"default-token-zjkv2"}]}]}
	I0811 00:54:28.512160 1439337 default_sa.go:45] found service account: "default"
	I0811 00:54:28.512183 1439337 default_sa.go:55] duration metric: took 197.829129ms for default service account to be created ...
	I0811 00:54:28.512191 1439337 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 00:54:28.708553 1439337 request.go:600] Waited for 196.2924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.708626 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.708640 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.708646 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.708651 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.712159 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:28.712221 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.712239 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.712253 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.712268 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.712296 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.712315 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.712875 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 55311 chars]
	I0811 00:54:28.714522 1439337 system_pods.go:86] 8 kube-system pods found
	I0811 00:54:28.714545 1439337 system_pods.go:89] "coredns-558bd4d5db-lpxc6" [839d8a5e-9cef-4c9e-a07f-db7f529aaa6a] Running
	I0811 00:54:28.714552 1439337 system_pods.go:89] "etcd-multinode-20210811005307-1387367" [b98555c3-d9ce-452c-a2de-7ee50a50311d] Running
	I0811 00:54:28.714561 1439337 system_pods.go:89] "kindnet-xqj59" [5b61604f-90bf-41cc-9637-18fe68a7551c] Running
	I0811 00:54:28.714567 1439337 system_pods.go:89] "kube-apiserver-multinode-20210811005307-1387367" [520b1e32-479d-4e0e-8867-276c958ae125] Running
	I0811 00:54:28.714581 1439337 system_pods.go:89] "kube-controller-manager-multinode-20210811005307-1387367" [f0ca8783-2ede-4c80-adc7-94aa58a85ad1] Running
	I0811 00:54:28.714587 1439337 system_pods.go:89] "kube-proxy-sjx8s" [b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7] Running
	I0811 00:54:28.714597 1439337 system_pods.go:89] "kube-scheduler-multinode-20210811005307-1387367" [7a24d14d-4566-4ab3-a237-634064615837] Running
	I0811 00:54:28.714602 1439337 system_pods.go:89] "storage-provisioner" [3e157891-c819-4c4a-8e4d-da074ce5a161] Running
	I0811 00:54:28.714613 1439337 system_pods.go:126] duration metric: took 202.414183ms to wait for k8s-apps to be running ...
	I0811 00:54:28.714623 1439337 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 00:54:28.714676 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:54:28.724225 1439337 system_svc.go:56] duration metric: took 9.596075ms WaitForService to wait for kubelet.
	I0811 00:54:28.724251 1439337 kubeadm.go:547] duration metric: took 27.812752383s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 00:54:28.724296 1439337 node_conditions.go:102] verifying NodePressure condition ...
	I0811 00:54:28.908667 1439337 request.go:600] Waited for 184.299211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0811 00:54:28.908723 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0811 00:54:28.908735 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.908744 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.908749 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.911429 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:28.911447 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.911452 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.911455 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.911459 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.911463 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.911466 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.911564 1439337 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-
managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","o [truncated 5325 chars]
	I0811 00:54:28.912878 1439337 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:54:28.912911 1439337 node_conditions.go:123] node cpu capacity is 2
	I0811 00:54:28.912924 1439337 node_conditions.go:105] duration metric: took 188.622187ms to run NodePressure ...
	I0811 00:54:28.912932 1439337 start.go:231] waiting for startup goroutines ...
	I0811 00:54:28.916118 1439337 out.go:177] 
	I0811 00:54:28.916404 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:54:28.918950 1439337 out.go:177] * Starting node multinode-20210811005307-1387367-m02 in cluster multinode-20210811005307-1387367
	I0811 00:54:28.918979 1439337 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:54:28.921857 1439337 out.go:177] * Pulling base image ...
	I0811 00:54:28.921885 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:54:28.921897 1439337 cache.go:56] Caching tarball of preloaded images
	I0811 00:54:28.921950 1439337 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:54:28.922196 1439337 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 00:54:28.922224 1439337 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 00:54:28.922340 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:54:28.973092 1439337 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:54:28.973120 1439337 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:54:28.973136 1439337 cache.go:205] Successfully downloaded all kic artifacts
	I0811 00:54:28.973168 1439337 start.go:313] acquiring machines lock for multinode-20210811005307-1387367-m02: {Name:mkd6e705422cef7ce7e260ef11f9e40cbb420b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 00:54:28.973822 1439337 start.go:317] acquired machines lock for "multinode-20210811005307-1387367-m02" in 627.188µs
	I0811 00:54:28.973860 1439337 start.go:89] Provisioning new machine with config: &{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0811 00:54:28.973951 1439337 start.go:126] createHost starting for "m02" (driver="docker")
	I0811 00:54:28.976950 1439337 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 00:54:28.977094 1439337 start.go:160] libmachine.API.Create for "multinode-20210811005307-1387367" (driver="docker")
	I0811 00:54:28.977123 1439337 client.go:168] LocalClient.Create starting
	I0811 00:54:28.977191 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 00:54:28.977225 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:54:28.977245 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:54:28.977358 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 00:54:28.977374 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:54:28.977386 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:54:28.977674 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:54:29.009692 1439337 network_create.go:67] Found existing network {name:multinode-20210811005307-1387367 subnet:0x40010b5050 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0811 00:54:29.009733 1439337 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20210811005307-1387367-m02" container
	I0811 00:54:29.009802 1439337 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 00:54:29.042081 1439337 cli_runner.go:115] Run: docker volume create multinode-20210811005307-1387367-m02 --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --label created_by.minikube.sigs.k8s.io=true
	I0811 00:54:29.081155 1439337 oci.go:102] Successfully created a docker volume multinode-20210811005307-1387367-m02
	I0811 00:54:29.081242 1439337 cli_runner.go:115] Run: docker run --rm --name multinode-20210811005307-1387367-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --entrypoint /usr/bin/test -v multinode-20210811005307-1387367-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 00:54:29.693260 1439337 oci.go:106] Successfully prepared a docker volume multinode-20210811005307-1387367-m02
	W0811 00:54:29.693323 1439337 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 00:54:29.693334 1439337 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 00:54:29.693399 1439337 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 00:54:29.693609 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:54:29.693632 1439337 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 00:54:29.693683 1439337 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 00:54:29.825794 1439337 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210811005307-1387367-m02 --name multinode-20210811005307-1387367-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --network multinode-20210811005307-1387367 --ip 192.168.49.3 --volume multinode-20210811005307-1387367-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 00:54:30.404178 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Running}}
	I0811 00:54:30.460317 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 00:54:30.513780 1439337 cli_runner.go:115] Run: docker exec multinode-20210811005307-1387367-m02 stat /var/lib/dpkg/alternatives/iptables
	I0811 00:54:30.602948 1439337 oci.go:278] the created container "multinode-20210811005307-1387367-m02" has a running status.
	I0811 00:54:30.602980 1439337 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa...
	I0811 00:54:30.860232 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 00:54:30.860275 1439337 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 00:54:31.009081 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 00:54:31.073950 1439337 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 00:54:31.073977 1439337 kic_runner.go:115] Args: [docker exec --privileged multinode-20210811005307-1387367-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 00:54:39.886611 1439337 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.19289047s)
	I0811 00:54:39.886639 1439337 kic.go:188] duration metric: took 10.193004 seconds to extract preloaded images to volume
	I0811 00:54:39.886725 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 00:54:39.918981 1439337 machine.go:88] provisioning docker machine ...
	I0811 00:54:39.919016 1439337 ubuntu.go:169] provisioning hostname "multinode-20210811005307-1387367-m02"
	I0811 00:54:39.919076 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:39.959754 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:39.959932 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:39.959956 1439337 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210811005307-1387367-m02 && echo "multinode-20210811005307-1387367-m02" | sudo tee /etc/hostname
	I0811 00:54:40.116900 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210811005307-1387367-m02
	
	I0811 00:54:40.116976 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:40.155470 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:40.155639 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:40.155661 1439337 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210811005307-1387367-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210811005307-1387367-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210811005307-1387367-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 00:54:40.284661 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 00:54:40.284689 1439337 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/k
ey.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 00:54:40.284706 1439337 ubuntu.go:177] setting up certificates
	I0811 00:54:40.284715 1439337 provision.go:83] configureAuth start
	I0811 00:54:40.284775 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367-m02
	I0811 00:54:40.317683 1439337 provision.go:137] copyHostCerts
	I0811 00:54:40.317733 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:54:40.317764 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 00:54:40.317777 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:54:40.317847 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 00:54:40.317922 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:54:40.317945 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 00:54:40.317955 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:54:40.317977 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 00:54:40.318016 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:54:40.318036 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 00:54:40.318046 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:54:40.318066 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 00:54:40.318111 1439337 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210811005307-1387367-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210811005307-1387367-m02]
	I0811 00:54:40.826270 1439337 provision.go:171] copyRemoteCerts
	I0811 00:54:40.826343 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 00:54:40.826387 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:40.859966 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:40.944597 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 00:54:40.944657 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 00:54:40.963217 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 00:54:40.963273 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0811 00:54:40.979836 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 00:54:40.979891 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 00:54:40.996513 1439337 provision.go:86] duration metric: configureAuth took 711.780162ms
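configureAuth above generates a server certificate signed by the profile's CA with the SANs listed in the log (192.168.49.3, 127.0.0.1, localhost, minikube, and the m02 hostname). Below is an illustrative crypto/x509 sketch of minting such a certificate; the file names and the PKCS#1 assumption for the CA key are not taken from minikube's code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func mustPEM(path string) *pem.Block {
		raw, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in " + path)
		}
		return block
	}

	func main() {
		caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
		if err != nil {
			panic(err)
		}
		// Assumes an unencrypted PKCS#1 RSA CA key; adjust parsing if yours differs.
		caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes)
		if err != nil {
			panic(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210811005307-1387367-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-20210811005307-1387367-m02"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.49.3"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
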
	I0811 00:54:40.996540 1439337 ubuntu.go:193] setting minikube options for container-runtime
	I0811 00:54:40.996759 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:41.034283 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:41.034451 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:41.034467 1439337 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 00:54:41.148976 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 00:54:41.149058 1439337 ubuntu.go:71] root file system type: overlay
	I0811 00:54:41.149284 1439337 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 00:54:41.149386 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:41.189427 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:41.189596 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:41.189698 1439337 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 00:54:41.313409 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 00:54:41.313497 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:41.345873 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:41.346041 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:41.346068 1439337 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 00:54:42.360236 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 00:54:41.308523488 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 00:54:42.360299 1439337 machine.go:91] provisioned docker machine in 2.441293199s
	I0811 00:54:42.360321 1439337 client.go:171] LocalClient.Create took 13.383192304s
	I0811 00:54:42.360341 1439337 start.go:168] duration metric: libmachine.API.Create for "multinode-20210811005307-1387367" took 13.383247409s
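Provisioning above rendered a complete docker.service unit (note the empty ExecStart= line that clears the command inherited from the stock unit), wrote it to docker.service.new over SSH, and swapped it in and restarted Docker only because diff reported a change. A small text/template sketch of rendering a unit like that follows; the struct fields and template text are illustrative assumptions, not minikube's actual provisioner template:

	package main

	import (
		"os"
		"text/template"
	)

	type dockerUnit struct {
		NoProxy   string
		ExtraArgs string
	}

	const unitTmpl = `[Unit]
	Description=Docker Application Container Engine
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket

	[Service]
	Type=notify
	Restart=on-failure
	Environment="NO_PROXY={{.NoProxy}}"

	# Clear the ExecStart inherited from the stock unit before setting a new one;
	# systemd rejects two ExecStart= lines for Type=notify services.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraArgs}}
	ExecReload=/bin/kill -s HUP $MAINPID

	[Install]
	WantedBy=multi-user.target
	`

	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		// Render to stdout; the run above instead pipes the result through
		// `sudo tee /lib/systemd/system/docker.service.new` on the node.
		if err := t.Execute(os.Stdout, dockerUnit{
			NoProxy:   "192.168.49.2",
			ExtraArgs: "--default-ulimit=nofile=1048576:1048576 --insecure-registry 10.96.0.0/12",
		}); err != nil {
			panic(err)
		}
	}
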
	I0811 00:54:42.360376 1439337 start.go:267] post-start starting for "multinode-20210811005307-1387367-m02" (driver="docker")
	I0811 00:54:42.360397 1439337 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 00:54:42.360477 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 00:54:42.360534 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.405936 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:42.492512 1439337 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 00:54:42.495129 1439337 command_runner.go:124] > NAME="Ubuntu"
	I0811 00:54:42.495150 1439337 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0811 00:54:42.495155 1439337 command_runner.go:124] > ID=ubuntu
	I0811 00:54:42.495161 1439337 command_runner.go:124] > ID_LIKE=debian
	I0811 00:54:42.495168 1439337 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0811 00:54:42.495174 1439337 command_runner.go:124] > VERSION_ID="20.04"
	I0811 00:54:42.495182 1439337 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0811 00:54:42.495188 1439337 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0811 00:54:42.495199 1439337 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0811 00:54:42.495209 1439337 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0811 00:54:42.495217 1439337 command_runner.go:124] > VERSION_CODENAME=focal
	I0811 00:54:42.495223 1439337 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0811 00:54:42.495281 1439337 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 00:54:42.495300 1439337 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 00:54:42.495312 1439337 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 00:54:42.495322 1439337 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 00:54:42.495332 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 00:54:42.495387 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 00:54:42.495470 1439337 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 00:54:42.495483 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /etc/ssl/certs/13873672.pem
	I0811 00:54:42.495574 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 00:54:42.502251 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:54:42.519818 1439337 start.go:270] post-start completed in 159.413663ms
	I0811 00:54:42.520251 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367-m02
	I0811 00:54:42.552517 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:54:42.552766 1439337 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 00:54:42.552817 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.584493 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
I0811 00:54:42.665087 1439337 command_runner.go:124] > 81%!(MISSING)
	I0811 00:54:42.665118 1439337 start.go:129] duration metric: createHost completed in 13.69115856s
	I0811 00:54:42.665127 1439337 start.go:80] releasing machines lock for "multinode-20210811005307-1387367-m02", held for 13.691288217s
	I0811 00:54:42.665209 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367-m02
	I0811 00:54:42.701455 1439337 out.go:177] * Found network options:
	I0811 00:54:42.703731 1439337 out.go:177]   - NO_PROXY=192.168.49.2
	W0811 00:54:42.703771 1439337 proxy.go:118] fail to check proxy env: Error ip not in block
	W0811 00:54:42.703803 1439337 proxy.go:118] fail to check proxy env: Error ip not in block
	I0811 00:54:42.703935 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 00:54:42.703960 1439337 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 00:54:42.703982 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.704027 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.750435 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:42.771505 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:43.019136 1439337 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0811 00:54:43.019202 1439337 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0811 00:54:43.019221 1439337 command_runner.go:124] > <H1>302 Moved</H1>
	I0811 00:54:43.019239 1439337 command_runner.go:124] > The document has moved
	I0811 00:54:43.019257 1439337 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0811 00:54:43.019290 1439337 command_runner.go:124] > </BODY></HTML>
	I0811 00:54:43.024245 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:54:43.034341 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:54:43.034361 1439337 command_runner.go:124] > [Unit]
	I0811 00:54:43.034370 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:54:43.034377 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:54:43.034382 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:54:43.034391 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:54:43.034401 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:54:43.034407 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:54:43.034415 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:54:43.034420 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:54:43.034427 1439337 command_runner.go:124] > [Service]
	I0811 00:54:43.034432 1439337 command_runner.go:124] > Type=notify
	I0811 00:54:43.034446 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:54:43.034452 1439337 command_runner.go:124] > Environment=NO_PROXY=192.168.49.2
	I0811 00:54:43.034467 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:54:43.034482 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:54:43.034492 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:54:43.034504 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:54:43.034519 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:54:43.034529 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:54:43.034543 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:54:43.034561 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:54:43.034575 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:54:43.034579 1439337 command_runner.go:124] > ExecStart=
	I0811 00:54:43.034605 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:54:43.034617 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:54:43.034628 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:54:43.034641 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:54:43.034650 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:54:43.034658 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:54:43.034663 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:54:43.034671 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:54:43.034684 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:54:43.034689 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:54:43.034696 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:54:43.034707 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:54:43.034716 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:54:43.034724 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:54:43.034733 1439337 command_runner.go:124] > KillMode=process
	I0811 00:54:43.034743 1439337 command_runner.go:124] > [Install]
	I0811 00:54:43.034752 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:54:43.034764 1439337 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 00:54:43.034817 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 00:54:43.045479 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 00:54:43.064087 1439337 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0811 00:54:43.064109 1439337 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
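The file written above is the standard crictl configuration: with both endpoints pointed at dockershim, CRI tooling on the node talks to Docker. As a rough usage sketch (assuming crictl is present on the node), listing containers through that socket would be:

    sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock ps

or simply `sudo crictl ps`, since crictl reads /etc/crictl.yaml by default.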
	I0811 00:54:43.065432 1439337 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 00:54:43.149282 1439337 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 00:54:43.238390 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:54:43.246922 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:54:43.247684 1439337 command_runner.go:124] > [Unit]
	I0811 00:54:43.247718 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:54:43.247738 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:54:43.247779 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:54:43.247797 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:54:43.247803 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:54:43.247812 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:54:43.247825 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:54:43.247836 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:54:43.247841 1439337 command_runner.go:124] > [Service]
	I0811 00:54:43.247851 1439337 command_runner.go:124] > Type=notify
	I0811 00:54:43.247856 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:54:43.247862 1439337 command_runner.go:124] > Environment=NO_PROXY=192.168.49.2
	I0811 00:54:43.247872 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:54:43.247891 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:54:43.247909 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:54:43.247920 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:54:43.247930 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:54:43.247944 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:54:43.247954 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:54:43.247967 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:54:43.247982 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:54:43.247986 1439337 command_runner.go:124] > ExecStart=
	I0811 00:54:43.248015 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:54:43.248026 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:54:43.248037 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:54:43.248049 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:54:43.248057 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:54:43.248069 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:54:43.248074 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:54:43.248082 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:54:43.248094 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:54:43.248102 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:54:43.248107 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:54:43.248120 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:54:43.248125 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:54:43.248133 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:54:43.248145 1439337 command_runner.go:124] > KillMode=process
	I0811 00:54:43.248151 1439337 command_runner.go:124] > [Install]
	I0811 00:54:43.248157 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:54:43.249339 1439337 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 00:54:43.342921 1439337 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 00:54:43.352798 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:54:43.414498 1439337 command_runner.go:124] > 20.10.7
	I0811 00:54:43.417618 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:54:43.468394 1439337 command_runner.go:124] > 20.10.7
	I0811 00:54:43.475858 1439337 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 00:54:43.477955 1439337 out.go:177]   - env NO_PROXY=192.168.49.2
	I0811 00:54:43.478028 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:54:43.509881 1439337 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 00:54:43.513509 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:54:43.524526 1439337 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367 for IP: 192.168.49.3
	I0811 00:54:43.524579 1439337 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 00:54:43.524598 1439337 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 00:54:43.524612 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 00:54:43.524625 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 00:54:43.524662 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 00:54:43.524675 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 00:54:43.524727 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 00:54:43.524775 1439337 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 00:54:43.524791 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 00:54:43.524815 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 00:54:43.524844 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 00:54:43.524870 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 00:54:43.524919 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:54:43.524955 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.524971 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.524987 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem -> /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.525398 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 00:54:43.544023 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 00:54:43.560663 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 00:54:43.577005 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 00:54:43.593430 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 00:54:43.609865 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 00:54:43.626164 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 00:54:43.642676 1439337 ssh_runner.go:149] Run: openssl version
	I0811 00:54:43.647038 1439337 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0811 00:54:43.647406 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 00:54:43.654398 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.656948 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.657286 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.657333 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.661954 1439337 command_runner.go:124] > b5213941
	I0811 00:54:43.662017 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 00:54:43.668789 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 00:54:43.675944 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.678752 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.678990 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.679040 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.683512 1439337 command_runner.go:124] > 51391683
	I0811 00:54:43.683852 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 00:54:43.691161 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 00:54:43.697979 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.700576 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.700866 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.700918 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.705434 1439337 command_runner.go:124] > 3ec20f2e
	I0811 00:54:43.705777 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
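For context, the hash-and-link sequence above follows OpenSSL's certificate directory convention: a CA under /etc/ssl/certs is looked up through a symlink named after its subject hash with a .0 suffix. A rough equivalent for a hypothetical my-ca.pem would be:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${HASH}.0"

which mirrors the openssl x509 -hash / ln -fs pairs the provisioner runs here for minikubeCA.pem, 1387367.pem and 13873672.pem.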
	I0811 00:54:43.712454 1439337 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 00:54:43.832028 1439337 command_runner.go:124] > cgroupfs
	I0811 00:54:43.835534 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:54:43.835579 1439337 cni.go:154] 2 nodes found, recommending kindnet
	I0811 00:54:43.835604 1439337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 00:54:43.835627 1439337 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210811005307-1387367 NodeName:multinode-20210811005307-1387367-m02 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCA
File:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 00:54:43.835770 1439337 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20210811005307-1387367-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 00:54:43.835865 1439337 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20210811005307-1387367-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 00:54:43.835935 1439337 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 00:54:43.841837 1439337 command_runner.go:124] > kubeadm
	I0811 00:54:43.841879 1439337 command_runner.go:124] > kubectl
	I0811 00:54:43.841897 1439337 command_runner.go:124] > kubelet
	I0811 00:54:43.842792 1439337 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 00:54:43.842856 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0811 00:54:43.850102 1439337 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0811 00:54:43.862601 1439337 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 00:54:43.874704 1439337 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 00:54:43.877535 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
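The two /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) use a grep-and-rewrite idiom instead of sed -i: any existing line for the name is filtered out, the fresh mapping is appended, and the result is copied back with cp, since /etc/hosts inside the container is typically a bind mount that can only be written in place rather than replaced by rename. A generalized sketch of the same idiom for a hypothetical entry:

    { grep -v $'\tmy-host.internal$' /etc/hosts; echo "10.0.0.5	my-host.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts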
	I0811 00:54:43.885806 1439337 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 00:54:43.886293 1439337 start.go:241] JoinCluster: &{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0811 00:54:43.886377 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0811 00:54:43.886423 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:54:43.919179 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:54:44.081445 1439337 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token d5zr0h.q1pm3uca3ghnt70i --discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 
	I0811 00:54:44.086975 1439337 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0811 00:54:44.087014 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token d5zr0h.q1pm3uca3ghnt70i --discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20210811005307-1387367-m02"
	I0811 00:54:44.143277 1439337 command_runner.go:124] > [preflight] Running pre-flight checks
	I0811 00:54:44.451763 1439337 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0811 00:54:44.451784 1439337 command_runner.go:124] > KERNEL_VERSION: 5.8.0-1041-aws
	I0811 00:54:44.451792 1439337 command_runner.go:124] > DOCKER_VERSION: 20.10.7
	I0811 00:54:44.451800 1439337 command_runner.go:124] > DOCKER_GRAPH_DRIVER: overlay2
	I0811 00:54:44.451806 1439337 command_runner.go:124] > OS: Linux
	I0811 00:54:44.451813 1439337 command_runner.go:124] > CGROUPS_CPU: enabled
	I0811 00:54:44.451820 1439337 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0811 00:54:44.451826 1439337 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0811 00:54:44.451833 1439337 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0811 00:54:44.451846 1439337 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0811 00:54:44.451852 1439337 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0811 00:54:44.451859 1439337 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0811 00:54:44.451866 1439337 command_runner.go:124] > CGROUPS_HUGETLB: enabled
	I0811 00:54:44.611961 1439337 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0811 00:54:44.611993 1439337 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0811 00:54:44.644269 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 00:54:44.644549 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 00:54:44.644570 1439337 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0811 00:54:44.745915 1439337 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0811 00:54:50.310561 1439337 command_runner.go:124] > This node has joined the cluster:
	I0811 00:54:50.310586 1439337 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0811 00:54:50.310595 1439337 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0811 00:54:50.310604 1439337 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0811 00:54:50.313793 1439337 command_runner.go:124] ! 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0811 00:54:50.313824 1439337 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
	I0811 00:54:50.313836 1439337 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 00:54:50.313856 1439337 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token d5zr0h.q1pm3uca3ghnt70i --discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20210811005307-1387367-m02": (6.226830739s)
	I0811 00:54:50.313871 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0811 00:54:50.408968 1439337 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0811 00:54:50.497995 1439337 start.go:243] JoinCluster complete in 6.611697866s
	I0811 00:54:50.498018 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:54:50.498025 1439337 cni.go:154] 2 nodes found, recommending kindnet
	I0811 00:54:50.498081 1439337 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0811 00:54:50.502538 1439337 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0811 00:54:50.502566 1439337 command_runner.go:124] >   Size: 2603192   	Blocks: 5088       IO Block: 4096   regular file
	I0811 00:54:50.502575 1439337 command_runner.go:124] > Device: 3fh/63d	Inode: 2356928     Links: 1
	I0811 00:54:50.502583 1439337 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 00:54:50.502590 1439337 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:54:50.502597 1439337 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:54:50.502604 1439337 command_runner.go:124] > Change: 2021-07-02 14:49:52.887930340 +0000
	I0811 00:54:50.502608 1439337 command_runner.go:124] >  Birth: -
	I0811 00:54:50.502884 1439337 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0811 00:54:50.502898 1439337 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0811 00:54:50.515400 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 00:54:50.745633 1439337 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0811 00:54:50.748145 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0811 00:54:50.750666 1439337 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0811 00:54:50.762393 1439337 command_runner.go:124] > daemonset.apps/kindnet configured
	I0811 00:54:50.770246 1439337 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0811 00:54:50.772936 1439337 out.go:177] * Verifying Kubernetes components...
	I0811 00:54:50.773050 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:54:50.783888 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:50.784194 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-202
10811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:50.785499 1439337 node_ready.go:35] waiting up to 6m0s for node "multinode-20210811005307-1387367-m02" to be "Ready" ...
	I0811 00:54:50.785578 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:50.785589 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:50.785595 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:50.785604 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:50.787598 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:50.787618 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:50.787623 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:50.787627 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:50.787631 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:50 GMT
	I0811 00:54:50.787634 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:50.787638 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:50.787760 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:51.288736 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:51.288761 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:51.288767 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:51.288772 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:51.291186 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:51.291202 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:51.291207 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:51.291210 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:51 GMT
	I0811 00:54:51.291214 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:51.291218 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:51.291221 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:51.291704 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:51.788205 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:51.788228 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:51.788235 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:51.788240 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:51.804491 1439337 round_trippers.go:457] Response Status: 200 OK in 16 milliseconds
	I0811 00:54:51.804511 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:51.804517 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:51.804521 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:51.804525 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:51.804528 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:51.804532 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:51 GMT
	I0811 00:54:51.805979 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:52.288821 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:52.288853 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:52.288860 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:52.288865 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:52.291107 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:52.291125 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:52.291130 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:52.291134 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:52.291140 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:52 GMT
	I0811 00:54:52.291143 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:52.291146 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:52.291300 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:52.788388 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:52.788416 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:52.788423 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:52.788428 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:52.799716 1439337 round_trippers.go:457] Response Status: 200 OK in 11 milliseconds
	I0811 00:54:52.799742 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:52.799748 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:52.799752 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:52.799756 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:52 GMT
	I0811 00:54:52.799760 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:52.799764 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:52.800216 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:52.800472 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
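The repeated GET requests above are minikube polling the node object until its Ready condition flips to True. A roughly equivalent manual check (assuming the profile name is also the kubeconfig context, as elsewhere in this report) would be:

    kubectl --context multinode-20210811005307-1387367 get node multinode-20210811005307-1387367-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

which prints True once the kubelet on m02 reports ready.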
	I0811 00:54:53.288398 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:53.288420 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:53.288427 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:53.288431 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:53.290655 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:53.290670 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:53.290675 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:53.290679 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:53.290682 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:53.290686 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:53 GMT
	I0811 00:54:53.290690 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:53.290861 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:53.788408 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:53.788429 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:53.788435 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:53.788439 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:53.790943 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:53.790959 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:53.790964 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:53.790968 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:53.790971 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:53.790975 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:53.790978 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:53 GMT
	I0811 00:54:53.791439 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:54.288347 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:54.288372 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:54.288378 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:54.288383 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:54.290534 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:54.290550 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:54.290555 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:54.290559 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:54.290563 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:54.290567 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:54.290570 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:54 GMT
	I0811 00:54:54.290671 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:54.788225 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:54.788246 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:54.788252 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:54.788257 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:54.790654 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:54.790671 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:54.790676 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:54.790680 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:54.790684 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:54.790688 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:54.790692 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:54 GMT
	I0811 00:54:54.790800 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:55.288569 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:55.288592 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:55.288598 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:55.288603 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:55.294389 1439337 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0811 00:54:55.294409 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:55.294414 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:55.294418 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:55.294422 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:55.294426 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:55.294430 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:55 GMT
	I0811 00:54:55.294547 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:55.294808 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
	I0811 00:54:55.788183 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:55.788203 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:55.788209 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:55.788214 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:55.790252 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:55.790269 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:55.790274 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:55 GMT
	I0811 00:54:55.790279 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:55.790282 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:55.790285 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:55.790289 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:55.790400 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:56.288311 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:56.288341 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:56.288347 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:56.288352 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:56.290684 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:56.290701 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:56.290705 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:56.290709 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:56.290713 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:56.290716 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:56.290720 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:56 GMT
	I0811 00:54:56.290857 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:56.788204 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:56.788231 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:56.788237 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:56.788242 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:56.791372 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:56.791395 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:56.791401 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:56.791408 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:56 GMT
	I0811 00:54:56.791413 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:56.791417 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:56.791420 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:56.791547 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:57.288521 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:57.288565 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:57.288572 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:57.288577 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:57.290841 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:57.290859 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:57.290864 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:57.290868 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:57.290872 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:57.290875 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:57.290879 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:57 GMT
	I0811 00:54:57.291017 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:57.788232 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:57.788261 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:57.788267 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:57.788272 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:57.791325 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:57.791344 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:57.791351 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:57.791355 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:57.791359 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:57.791363 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:57.791368 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:57 GMT
	I0811 00:54:57.791547 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:57.791821 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
	I0811 00:54:58.288182 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:58.288209 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:58.288218 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:58.288222 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:58.290540 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:58.290564 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:58.290569 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:58.290573 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:58 GMT
	I0811 00:54:58.290577 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:58.290581 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:58.290585 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:58.290771 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:58.788724 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:58.788754 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:58.788760 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:58.788765 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:58.791510 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:58.791534 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:58.791539 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:58.791543 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:58 GMT
	I0811 00:54:58.791546 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:58.791550 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:58.791553 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:58.791686 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:59.288590 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:59.288618 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:59.288625 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:59.288630 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:59.291031 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:59.291053 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:59.291058 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:59.291062 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:59.291065 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:59.291071 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:59.291075 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:59 GMT
	I0811 00:54:59.291248 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:59.789056 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:59.789079 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:59.789085 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:59.789090 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:59.791574 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:59.791592 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:59.791597 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:59.791601 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:59.791604 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:59.791608 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:59.791612 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:59 GMT
	I0811 00:54:59.791731 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:59.791978 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
	I0811 00:55:00.288225 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:55:00.288252 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.288259 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.288264 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.290393 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.290409 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.290414 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.290418 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.290422 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.290426 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.290429 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.290538 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"595","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:m
etadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spe [truncated 4508 chars]
	I0811 00:55:00.290782 1439337 node_ready.go:49] node "multinode-20210811005307-1387367-m02" has status "Ready":"True"
	I0811 00:55:00.290790 1439337 node_ready.go:38] duration metric: took 9.50526433s waiting for node "multinode-20210811005307-1387367-m02" to be "Ready" ...
	I0811 00:55:00.290800 1439337 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
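[editor's note] The node_ready step above polls the node object roughly every 500ms and inspects its Ready condition; once that condition reports True it moves on to the pod_ready step for the system-critical pods listed in the previous line. A minimal client-go sketch of the same loop is below (assumptions: k8s.io/client-go and k8s.io/apimachinery are on the module path; waitNodeReady is a hypothetical helper name, not minikube's node_ready.go API).

// Sketch only: poll a node's Ready condition until it is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady re-fetches the node every 500ms (the interval visible in the
// log above) and returns once its Ready condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet" and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}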
	I0811 00:55:00.290863 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:55:00.290868 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.290875 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.290879 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.294373 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:55:00.294579 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.294606 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.294633 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.294650 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.294664 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.294692 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.295211 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 69154 chars]
	I0811 00:55:00.299413 1439337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.299758 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:55:00.299893 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.299920 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.299926 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.306045 1439337 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0811 00:55:00.306067 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.306072 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.306076 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.306080 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.306083 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.306087 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.306222 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 6071 chars]
	I0811 00:55:00.306592 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.306610 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.306615 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.306620 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.308138 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.308163 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.308168 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.308173 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.308176 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.308180 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.308196 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.308310 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.308598 1439337 pod_ready.go:92] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.308619 1439337 pod_ready.go:81] duration metric: took 9.154807ms waiting for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.308643 1439337 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.308711 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210811005307-1387367
	I0811 00:55:00.308722 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.308727 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.308740 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.310463 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.310483 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.310488 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.310492 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.310495 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.310499 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.310520 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.310631 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210811005307-1387367","namespace":"kube-system","uid":"b98555c3-d9ce-452c-a2de-7ee50a50311d","resourceVersion":"459","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.mirror":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.seen":"2021-08-11T00:53:47.643869676Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5588 chars]
	I0811 00:55:00.310940 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.310956 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.310962 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.310967 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.312442 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.312462 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.312467 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.312471 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.312474 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.312478 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.312509 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.312613 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.312873 1439337 pod_ready.go:92] pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.312888 1439337 pod_ready.go:81] duration metric: took 4.233467ms waiting for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
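[editor's note] Each pod_ready check above is a single GET of the pod followed by a scan of status.conditions for the Ready condition (plus a GET of the node the pod is scheduled on). A minimal sketch of the pod-side check, assuming k8s.io/client-go; isPodReady is a hypothetical helper name, not minikube's pod_ready.go API.

// Sketch only: report whether a pod's Ready condition is currently True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, namespace, c.Status)
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}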
	I0811 00:55:00.312903 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.312950 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210811005307-1387367
	I0811 00:55:00.312961 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.312966 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.312971 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.314738 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.314754 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.314758 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.314762 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.314765 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.314770 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.314775 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.314882 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210811005307-1387367","namespace":"kube-system","uid":"520b1e32-479d-4e0e-8867-276c958ae125","resourceVersion":"460","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.mirror":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.seen":"2021-08-11T00:53:31.835501949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 8113 chars]
	I0811 00:55:00.315228 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.315239 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.315244 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.315249 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.316853 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.316888 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.316921 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.316940 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.316959 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.316964 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.316968 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.317095 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.317350 1439337 pod_ready.go:92] pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.317365 1439337 pod_ready.go:81] duration metric: took 4.452945ms waiting for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.317375 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.317426 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210811005307-1387367
	I0811 00:55:00.317437 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.317441 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.317446 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.319081 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.319109 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.319126 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.319129 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.319133 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.319136 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.319140 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.319261 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210811005307-1387367","namespace":"kube-system","uid":"f0ca8783-2ede-4c80-adc7-94aa58a85ad1","resourceVersion":"462","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.mirror":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.seen":"2021-08-11T00:53:31.835503352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/c
onfig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/conf [truncated 7679 chars]
	I0811 00:55:00.319596 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.319613 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.319618 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.319622 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.321381 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.321401 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.321406 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.321409 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.321413 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.321435 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.321439 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.321750 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.322017 1439337 pod_ready.go:92] pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.322034 1439337 pod_ready.go:81] duration metric: took 4.644993ms waiting for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.322045 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29jgc" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.489237 1439337 request.go:600] Waited for 167.13354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29jgc
	I0811 00:55:00.489343 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29jgc
	I0811 00:55:00.489371 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.489384 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.489390 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.491649 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.491686 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.491691 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.491695 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.491699 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.491702 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.491706 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.492004 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29jgc","generateName":"kube-proxy-","namespace":"kube-system","uid":"4cd8a483-2d40-4f4a-817d-8330332fe9bc","resourceVersion":"578","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"37aa45af-7498-4003-abc1-af1fe65a80b1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37aa45af-7498-4003-abc1-af1fe65a80b1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5785 chars]
	I0811 00:55:00.688667 1439337 request.go:600] Waited for 196.270465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:55:00.688753 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:55:00.688766 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.688814 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.688827 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.691188 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.691209 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.691214 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.691218 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.691222 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.691225 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.691229 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.691336 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"595","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:m
etadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spe [truncated 4508 chars]
	I0811 00:55:00.691595 1439337 pod_ready.go:92] pod "kube-proxy-29jgc" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.691609 1439337 pod_ready.go:81] duration metric: took 369.557115ms waiting for pod "kube-proxy-29jgc" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.691619 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.888988 1439337 request.go:600] Waited for 197.303871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjx8s
	I0811 00:55:00.889087 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjx8s
	I0811 00:55:00.889138 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.889152 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.889165 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.891617 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.891635 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.891640 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.891643 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.891647 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.891653 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.891657 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.891972 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7","resourceVersion":"482","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"37aa45af-7498-4003-abc1-af1fe65a80b1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37aa45af-7498-4003-abc1-af1fe65a80b1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5777 chars]
	I0811 00:55:01.088728 1439337 request.go:600] Waited for 196.322674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.088801 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.088812 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.088821 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.088873 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.091359 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.091413 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.091431 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.091447 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.091464 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.091492 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.091497 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.091604 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:01.091881 1439337 pod_ready.go:92] pod "kube-proxy-sjx8s" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:01.091897 1439337 pod_ready.go:81] duration metric: took 400.265134ms waiting for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:01.091908 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:01.288291 1439337 request.go:600] Waited for 196.314683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:55:01.288402 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:55:01.288414 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.288420 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.288426 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.290703 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.290752 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.290769 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.290786 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.290807 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.290831 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.290841 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.290941 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210811005307-1387367","namespace":"kube-system","uid":"7a24d14d-4566-4ab3-a237-634064615837","resourceVersion":"476","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.mirror":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.seen":"2021-08-11T00:53:47.643889688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"
f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f: [truncated 4561 chars]
	I0811 00:55:01.488440 1439337 request.go:600] Waited for 197.166247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.488506 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.488517 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.488541 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.488546 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.490746 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.490767 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.490773 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.490777 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.490780 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.490784 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.490799 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.491254 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:01.491582 1439337 pod_ready.go:92] pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:01.491595 1439337 pod_ready.go:81] duration metric: took 399.677986ms waiting for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:01.491608 1439337 pod_ready.go:38] duration metric: took 1.20079796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:55:01.491633 1439337 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 00:55:01.491696 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:55:01.501953 1439337 system_svc.go:56] duration metric: took 10.31352ms WaitForService to wait for kubelet.
	I0811 00:55:01.501981 1439337 kubeadm.go:547] duration metric: took 10.731690233s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 00:55:01.502016 1439337 node_conditions.go:102] verifying NodePressure condition ...
	I0811 00:55:01.688350 1439337 request.go:600] Waited for 186.238183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0811 00:55:01.688416 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0811 00:55:01.688426 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.688432 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.688439 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.690977 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.691050 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.691073 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.691089 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.691103 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.691126 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.691150 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.691350 1439337 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-
managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","o [truncated 10825 chars]
	I0811 00:55:01.691786 1439337 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:55:01.691808 1439337 node_conditions.go:123] node cpu capacity is 2
	I0811 00:55:01.691819 1439337 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:55:01.691828 1439337 node_conditions.go:123] node cpu capacity is 2
	I0811 00:55:01.691836 1439337 node_conditions.go:105] duration metric: took 189.808795ms to run NodePressure ...
	I0811 00:55:01.691848 1439337 start.go:231] waiting for startup goroutines ...
	I0811 00:55:01.758927 1439337 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0811 00:55:01.763608 1439337 out.go:177] * Done! kubectl is now configured to use "multinode-20210811005307-1387367" cluster and "default" namespace by default
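	
	Note on the pod_ready.go / node_conditions.go lines above: they come from minikube's post-start verification, which polls the API server until the system pods report Ready and then inspects each node's capacity and pressure conditions; the "Waited ... due to client-side throttling" messages are the client-go rate limiter, not API priority-and-fairness. The sketch below is not minikube's code, only a minimal client-go illustration of the same kind of check (it assumes a kubeconfig at the default ~/.kube/config location), with QPS/Burst raised above the client-go defaults to show where that throttling is configured.
	
	// nodecheck.go - illustrative sketch only; not part of minikube or this test suite.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		// Raising QPS/Burst above the client-go defaults reduces the
		// "Waited ... due to client-side throttling" delays seen in the log.
		cfg.QPS = 20
		cfg.Burst = 40
	
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
	
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			// Capacity corresponds to the "node cpu capacity" / "storage ephemeral capacity" lines above.
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
	
			ready := false
			for _, c := range n.Status.Conditions {
				// MemoryPressure/DiskPressure/PIDPressure should be False; Ready should be True.
				fmt.Printf("  condition %s=%s (%s)\n", c.Type, c.Status, c.Reason)
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("  Ready=%v\n", ready)
		}
	}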
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-08-11 00:53:09 UTC, end at Wed 2021-08-11 01:05:05 UTC. --
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135649184Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135683817Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135701171Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135711485Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.145258347Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.151810323Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.151841059Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.151848214Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.152001690Z" level=info msg="Loading containers: start."
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.363418038Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.449682669Z" level=info msg="Loading containers: done."
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.467840513Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.467916632Z" level=info msg="Daemon has completed initialization"
	Aug 11 00:53:20 multinode-20210811005307-1387367 systemd[1]: Started Docker Application Container Engine.
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.517120905Z" level=info msg="API listen on [::]:2376"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.519425777Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 11 00:55:04 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:04.874364100Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:04 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:04.876583138Z" level=error msg="Handler for POST /v1.41/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:19 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:19.394093774Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:19 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:19.396465664Z" level=error msg="Handler for POST /v1.41/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:44 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:44.322643326Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:44 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:44.325445564Z" level=error msg="Handler for POST /v1.41/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:56:27 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:56:27.444963234Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:57:54 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:57:54.516587981Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 01:00:44 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T01:00:44.722322503Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
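	
	The repeated toomanyrequests errors above show the anonymous Docker Hub pull limit being hit on this CI host, which is what prevents the busybox image pulls later in the test. As a sketch only (the ratelimitpreview endpoint and header names are taken from Docker's public documentation on checking pull limits, not from this test run), the remaining anonymous quota can be inspected like this; the usual remediation is to authenticate the daemon or point it at a registry mirror.
	
	// ratelimit.go - illustrative sketch: query the anonymous Docker Hub pull quota.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)
	
	func main() {
		// 1. Fetch an anonymous pull token for the ratelimitpreview/test repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			log.Fatal(err)
		}
	
		// 2. HEAD the test manifest; the quota is reported in the response headers.
		req, err := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			log.Fatal(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		resp2, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp2.Body.Close()
	
		// Values look like "100;w=21600", i.e. pulls allowed per six-hour window.
		fmt.Println("ratelimit-limit:    ", resp2.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", resp2.Header.Get("ratelimit-remaining"))
	}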
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID
	fb85947729a5e       1a1f05a2cd7c2                                                                              10 minutes ago      Running             coredns                   0                   3c3987c621535
	e7872a86c850c       ba04bb24b9575                                                                              10 minutes ago      Running             storage-provisioner       0                   36e2ef011ff7e
	cfc02224ae9dd       kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c   11 minutes ago      Running             kindnet-cni               0                   c3e92f3f028a3
	e36db61f87b4d       4ea38350a1beb                                                                              11 minutes ago      Running             kube-proxy                0                   14679d4e451ff
	1c09dc0ad10ec       cb310ff289d79                                                                              11 minutes ago      Running             kube-controller-manager   0                   4e6e27aeb111d
	9e13d13bead3b       31a3b96cefc1e                                                                              11 minutes ago      Running             kube-scheduler            0                   4ec36aab9e5f6
	bfe8629569ccb       44a6d50ef170d                                                                              11 minutes ago      Running             kube-apiserver            0                   c7ed64fb2f162
	678b20fb70dc2       05b738aa1bc63                                                                              11 minutes ago      Running             etcd                      0                   2d1df52c9254e
	
	* 
	* ==> coredns [fb85947729a5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210811005307-1387367
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-20210811005307-1387367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=multinode-20210811005307-1387367
	                    minikube.k8s.io/updated_at=2021_08_11T00_53_48_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Aug 2021 00:53:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210811005307-1387367
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Aug 2021 01:05:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:53:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:53:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:53:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:54:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20210811005307-1387367
	Capacity:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                a9502501-581c-4295-8dea-fb7a922e5304
	  Boot ID:                    dff2c102-a0cf-4fb0-a2ea-36617f3a3229
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-2jxsd                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-558bd4d5db-lpxc6                                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-multinode-20210811005307-1387367                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-xqj59                                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-multinode-20210811005307-1387367              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-20210811005307-1387367     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-sjx8s                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-20210811005307-1387367              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  11m (x4 over 11m)  kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x4 over 11m)  kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x3 over 11m)  kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                10m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeReady
	
	
	Name:               multinode-20210811005307-1387367-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-20210811005307-1387367-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Aug 2021 00:54:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210811005307-1387367-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Aug 2021 01:05:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20210811005307-1387367-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                62b75c9b-274b-4e0a-a6bc-ecf3fccdcede
	  Boot ID:                    dff2c102-a0cf-4fb0-a2ea-36617f3a3229
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-c9mqs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-bsbng               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-29jgc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                10m                kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001093] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000822] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000cf2b9e77
	[  +0.001079] FS-Cache: N-key=[8] '38a8010000000000'
	[  +0.008061] FS-Cache: Duplicate cookie detected
	[  +0.000824] FS-Cache: O-cookie c=000000009e8af87d [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000d0f41ca1 n=00000000882d24dd
	[  +0.001078] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000828] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001344] FS-Cache: N-cookie d=00000000d0f41ca1 n=000000006ce4882d
	[  +0.001069] FS-Cache: N-key=[8] '38a8010000000000'
	[  +1.509820] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=00000000e1eedaf3 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001318] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000025fbee24
	[  +0.001053] FS-Cache: O-key=[8] '37a8010000000000'
	[  +0.000829] FS-Cache: N-cookie c=000000006f83a19d [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001316] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000d322ea0c
	[  +0.001048] FS-Cache: N-key=[8] '37a8010000000000'
	[  +0.277640] FS-Cache: Duplicate cookie detected
	[  +0.000818] FS-Cache: O-cookie c=000000007ae3c387 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001327] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000004bd4688e
	[  +0.001069] FS-Cache: O-key=[8] '3ca8010000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000000007642642 [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001309] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000ae88504f
	[  +0.001071] FS-Cache: N-key=[8] '3ca8010000000000'
	
	* 
	* ==> etcd [678b20fb70dc] <==
	* 2021-08-11 01:01:17.332899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:01:27.333649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:01:37.333163 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:01:47.333074 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:01:57.333558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:07.333606 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:17.333458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:27.333453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:37.333319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:47.332909 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:57.333242 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:07.333610 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:17.332988 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:27.333414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:37.333633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:38.190399 I | mvcc: store.index: compact 860
	2021-08-11 01:03:38.191626 I | mvcc: finished scheduled compaction at 860 (took 900.376µs)
	2021-08-11 01:03:47.333391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:57.333235 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:07.333247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:17.333255 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:27.333756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:37.333644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:47.333119 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:57.333047 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  01:05:06 up 10:47,  0 users,  load average: 0.22, 0.50, 1.07
	Linux multinode-20210811005307-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [bfe8629569cc] <==
	* I0811 00:59:49.371119       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:00:24.886906       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:00:24.886972       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:00:24.886983       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:01:01.929336       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:01:01.929387       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:01:01.929396       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:01:34.564430       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:01:34.564478       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:01:34.564488       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:02:17.163381       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:02:17.163600       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:02:17.163680       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:03:01.989553       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:03:01.989606       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:03:01.989616       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:03:37.681059       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:03:37.681107       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:03:37.681116       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:04:22.572725       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:04:22.572935       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:04:22.572957       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:05:04.824051       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:05:04.824098       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:05:04.824353       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [1c09dc0ad10e] <==
	* I0811 00:53:59.902929       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0811 00:53:59.911433       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0811 00:54:00.234119       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0811 00:54:00.349567       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0811 00:54:00.413791       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0811 00:54:00.417240       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0811 00:54:00.417308       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0811 00:54:00.633507       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sjx8s"
	I0811 00:54:00.633542       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xqj59"
	I0811 00:54:00.684580       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ckrfd"
	I0811 00:54:00.696172       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-lpxc6"
	I0811 00:54:00.741742       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-ckrfd"
	I0811 00:54:24.605204       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0811 00:54:49.643184       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210811005307-1387367-m02" does not exist
	I0811 00:54:49.670595       1 range_allocator.go:373] Set node multinode-20210811005307-1387367-m02 PodCIDR to [10.244.1.0/24]
	I0811 00:54:49.689655       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bsbng"
	I0811 00:54:49.691241       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-29jgc"
	E0811 00:54:49.730169       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"77b04802-67f5-4c63-bfa7-7aafef47aa03", ResourceVersion:"491", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764240028, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40021174d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40021174e8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4002117500), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4002117518)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40014bc940), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Crea
tionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002117530), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.Flex
VolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002117548), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVo
lumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CS
IVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002117560), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*
v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014bc960)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014bc9a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amou
nt{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropa
gation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4002139140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400214a708), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000174690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(ni
l), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400214d5a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400214a750)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetConditio
n(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0811 00:54:49.754326       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"37aa45af-7498-4003-abc1-af1fe65a80b1", ResourceVersion:"483", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764240027, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4002007ba8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4002007bc0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4002007bd8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4002007bf0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40013b8460), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40020cbc40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002007c08), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002007c20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40013b84a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4002188960), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40020db9f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000176460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40021a0040)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020dba48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	W0811 00:54:54.608706       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210811005307-1387367-m02. Assuming now as a timestamp.
	I0811 00:54:54.609043       1 event.go:291] "Event occurred" object="multinode-20210811005307-1387367-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210811005307-1387367-m02 event: Registered Node multinode-20210811005307-1387367-m02 in Controller"
	I0811 00:55:02.961045       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0811 00:55:02.978990       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-c9mqs"
	I0811 00:55:03.005311       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-2jxsd"
	I0811 00:55:04.623493       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-c9mqs" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-c9mqs"
	
	* 
	* ==> kube-proxy [e36db61f87b4] <==
	* I0811 00:54:03.259878       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0811 00:54:03.259951       1 server_others.go:140] Detected node IP 192.168.49.2
	W0811 00:54:03.259984       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0811 00:54:03.291493       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0811 00:54:03.292302       1 server_others.go:212] Using iptables Proxier.
	I0811 00:54:03.292332       1 server_others.go:219] creating dualStackProxier for iptables.
	W0811 00:54:03.292344       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0811 00:54:03.292836       1 server.go:643] Version: v1.21.3
	I0811 00:54:03.294255       1 config.go:315] Starting service config controller
	I0811 00:54:03.294285       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0811 00:54:03.294364       1 config.go:224] Starting endpoint slice config controller
	I0811 00:54:03.294378       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0811 00:54:03.307553       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0811 00:54:03.310029       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0811 00:54:03.394503       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0811 00:54:03.394677       1 shared_informer.go:247] Caches are synced for service config 
	W0811 01:00:34.311069       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [9e13d13bead3] <==
	* W0811 00:53:44.289970       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0811 00:53:44.290047       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0811 00:53:44.290103       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0811 00:53:44.370326       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0811 00:53:44.370404       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 00:53:44.370410       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 00:53:44.370422       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0811 00:53:44.383954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 00:53:44.384202       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:53:44.384370       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.384546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.384722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 00:53:44.384882       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 00:53:44.385053       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.385235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 00:53:44.390020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.390179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 00:53:44.390349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 00:53:44.390461       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 00:53:44.390510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 00:53:44.405168       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 00:53:45.204983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 00:53:45.283915       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:53:45.422581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0811 00:53:45.970994       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-08-11 00:53:09 UTC, end at Wed 2021-08-11 01:05:06 UTC. --
	Aug 11 01:00:27 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:27.043766    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.725721    2325 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:1.28"
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.725768    2325 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:1.28"
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.725871    2325 kuberuntime_manager.go:864] container &Container{Name:busybox,Image:busybox:1.28,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5zfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod busybox-84b6686758-2jxsd_default(38a2c1c7-063c-4e65-9056-8e76fd707dd5): ErrImagePull: rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increas
e the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.726221    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:00:59 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:59.044632    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:13 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:13.044560    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:24 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:24.044964    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:36 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:36.044645    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:50 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:50.044666    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:04 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:04.044171    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:19 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:19.044201    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:30 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:30.044527    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:41 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:41.044532    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:52 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:52.044668    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:04 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:04.044598    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:17 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:17.045070    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:28 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:28.052305    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:43 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:43.044745    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:56 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:56.044691    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:09 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:09.044891    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:24 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:24.044752    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:38 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:38.044534    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:50 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:50.044483    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:05:01 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:05:01.044558    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	
	* 
	* ==> storage-provisioner [e7872a86c850] <==
	* I0811 00:54:27.010627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0811 00:54:27.028649       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0811 00:54:27.028695       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0811 00:54:27.056687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0811 00:54:27.056856       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210811005307-1387367_ef6970d5-57b3-408b-8903-f5d4b1b25dac!
	I0811 00:54:27.057780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfb4bebd-3736-4b46-8595-d59a34df22f5", APIVersion:"v1", ResourceVersion:"510", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210811005307-1387367_ef6970d5-57b3-408b-8903-f5d4b1b25dac became leader
	I0811 00:54:27.157477       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210811005307-1387367_ef6970d5-57b3-408b-8903-f5d4b1b25dac!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-20210811005307-1387367 -n multinode-20210811005307-1387367
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210811005307-1387367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: busybox-84b6686758-2jxsd busybox-84b6686758-c9mqs
helpers_test.go:270: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context multinode-20210811005307-1387367 describe pod busybox-84b6686758-2jxsd busybox-84b6686758-c9mqs
helpers_test.go:278: (dbg) kubectl --context multinode-20210811005307-1387367 describe pod busybox-84b6686758-2jxsd busybox-84b6686758-c9mqs:

                                                
                                                
-- stdout --
	Name:         busybox-84b6686758-2jxsd
	Namespace:    default
	Priority:     0
	Node:         multinode-20210811005307-1387367/192.168.49.2
	Start Time:   Wed, 11 Aug 2021 00:55:03 +0000
	Labels:       app=busybox
	              pod-template-hash=84b6686758
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.3
	IPs:
	  IP:           10.244.0.3
	Controlled By:  ReplicaSet/busybox-84b6686758
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m5zfg (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-m5zfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/busybox-84b6686758-2jxsd to multinode-20210811005307-1387367
	  Warning  Failed     9m23s (x3 over 10m)   kubelet            Failed to pull image "busybox:1.28": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m40s (x4 over 10m)   kubelet            Pulling image "busybox:1.28"
	  Warning  Failed     8m40s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     8m40s                 kubelet            Failed to pull image "busybox:1.28": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m13s (x6 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "busybox:1.28"
	
	
	Name:         busybox-84b6686758-c9mqs
	Namespace:    default
	Priority:     0
	Node:         multinode-20210811005307-1387367-m02/192.168.49.3
	Start Time:   Wed, 11 Aug 2021 00:55:02 +0000
	Labels:       app=busybox
	              pod-template-hash=84b6686758
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.1.2
	IPs:
	  IP:           10.244.1.2
	Controlled By:  ReplicaSet/busybox-84b6686758
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fqvdr (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-fqvdr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/busybox-84b6686758-c9mqs to multinode-20210811005307-1387367-m02
	  Normal   Pulling    8m33s (x4 over 10m)   kubelet            Pulling image "busybox:1.28"
	  Warning  Failed     8m33s (x4 over 10m)   kubelet            Failed to pull image "busybox:1.28": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m33s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     8m20s (x6 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "busybox:1.28"

                                                
                                                
-- /stdout --
helpers_test.go:281: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:282: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (605.27s)
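Note on the failure above: the kubelet log and both pod event lists point at the same root cause, Docker Hub answering "toomanyrequests" while pulling busybox:1.28, so both replicas sat in ImagePullBackOff for the full wait. A possible local mitigation, assuming the runner can complete one pull on the host (for example after docker login), is to pre-seed the image into the cluster so the kubelet never has to reach Docker Hub. This is a hedged sketch and not part of the recorded test run:

    # Hedged sketch (not from the test run): pull once on the host, then load the
    # image into the minikube profile so in-cluster pulls hit the local cache.
    docker pull busybox:1.28
    out/minikube-linux-arm64 -p multinode-20210811005307-1387367 image load busybox:1.28
    # Older minikube releases expose the same idea as "cache add":
    #   out/minikube-linux-arm64 cache add busybox:1.28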

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-2jxsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (182.606841ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:531: Pod busybox-84b6686758-2jxsd could not resolve 'host.minikube.internal': exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-20210811005307-1387367 -- exec busybox-84b6686758-c9mqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (194.69401ms)

                                                
                                                
** stderr ** 
	error: unable to upgrade connection: container not found ("busybox")

                                                
                                                
** /stderr **
multinode_test.go:531: Pod busybox-84b6686758-c9mqs could not resolve 'host.minikube.internal': exit status 1
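This failure is a downstream effect of the DeployApp2Nodes failure above: busybox:1.28 never pulled, so the busybox containers never started and kubectl exec fails with "container not found" before any DNS lookup runs. For reference, the check the test tries to run inside each pod is the pipeline from multinode_test.go:529, reproduced below with comments; the line-5 assumption refers to busybox's nslookup output format.

    # Pipeline the test executes inside each busybox pod (never reached here):
    # NR==5 keeps the Address line for the queried name in busybox nslookup output,
    # and cut -d' ' -f3 extracts the resolved IP of host.minikube.internal.
    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3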
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20210811005307-1387367
helpers_test.go:236: (dbg) docker inspect multinode-20210811005307-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a",
	        "Created": "2021-08-11T00:53:08.554271158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1439761,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-11T00:53:09.047827202Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/hostname",
	        "HostsPath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/hosts",
	        "LogPath": "/var/lib/docker/containers/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a-json.log",
	        "Name": "/multinode-20210811005307-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20210811005307-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20210811005307-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/21e583d79e3b146292577b4d05f8d8526f1323507981f139d59a588539c6191b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20210811005307-1387367",
	                "Source": "/var/lib/docker/volumes/multinode-20210811005307-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20210811005307-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20210811005307-1387367",
	                "name.minikube.sigs.k8s.io": "multinode-20210811005307-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e2f2f630f0d3756864343a7222d7c068ec558656959e55017181b93ce3089a53",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50285"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50284"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50281"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50283"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50282"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e2f2f630f0d3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20210811005307-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "549bdb3bf1ad",
	                        "multinode-20210811005307-1387367"
	                    ],
	                    "NetworkID": "895f73080075bf95fc7bbf77ee83def6add633e6a908afc47428f4d25c69cb31",
	                    "EndpointID": "bca199b2a8b1e88644cd3d2f5b90ac6963d4c7d7de35a9a19e7c399d1b37a8b6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
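For context on the port mappings shown in the inspect output above: minikube's docker driver publishes each container port (22, 2376, 5000, 8443, 32443) to an ephemeral port on 127.0.0.1, and the harness reads the mapping back with a docker container inspect Go template (the same template appears in the cli_runner calls further down in these logs). Below is a minimal, illustrative Go sketch of that lookup; it is not part of the test suite, and the container name is simply the profile from this run.

// Illustrative only: recover the host port Docker published for 22/tcp,
// using the same inspect template seen in the cli_runner log lines below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	container := "multinode-20210811005307-1387367" // profile name from this run
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the run captured above this would print 50285.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}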
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-20210811005307-1387367 -n multinode-20210811005307-1387367
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 logs -n 25: (1.36407616s)
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                  |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:43 UTC | Wed, 11 Aug 2021 00:48:43 UTC |
	|         | ssh sudo cat                                      |                                          |          |         |                               |                               |
	|         | /usr/share/ca-certificates/13873672.pem           |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:42 UTC | Wed, 11 Aug 2021 00:48:43 UTC |
	|         | version -o=json --components                      |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:43 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | update-context --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | -v=2                                              |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:43 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | ssh sudo cat                                      |                                          |          |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                         |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:44 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | update-context --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | -v=2                                              |                                          |          |         |                               |                               |
	| -p      | functional-20210811004603-1387367                 | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:44 UTC | Wed, 11 Aug 2021 00:48:44 UTC |
	|         | update-context --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | -v=2                                              |                                          |          |         |                               |                               |
	| delete  | -p                                                | functional-20210811004603-1387367        | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:48:44 UTC | Wed, 11 Aug 2021 00:48:47 UTC |
	|         | functional-20210811004603-1387367                 |                                          |          |         |                               |                               |
	| start   | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:48:47 UTC | Wed, 11 Aug 2021 00:50:31 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	|         | --memory=2200 --wait=true                         |                                          |          |         |                               |                               |
	|         | --driver=docker                                   |                                          |          |         |                               |                               |
	|         | --container-runtime=docker                        |                                          |          |         |                               |                               |
	| pause   | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:50:31 UTC | Wed, 11 Aug 2021 00:50:32 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:50:32 UTC | Wed, 11 Aug 2021 00:50:32 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210811004847-1387367       | testUser | v1.22.0 | Wed, 11 Aug 2021 00:50:32 UTC | Wed, 11 Aug 2021 00:50:43 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                          |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210811004847-1387367       | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:50:43 UTC | Wed, 11 Aug 2021 00:50:45 UTC |
	|         | json-output-20210811004847-1387367                |                                          |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210811005045-1387367 | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:50:45 UTC | Wed, 11 Aug 2021 00:50:46 UTC |
	|         | json-output-error-20210811005045-1387367          |                                          |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210811005046-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:50:46 UTC | Wed, 11 Aug 2021 00:51:28 UTC |
	|         | docker-network-20210811005046-1387367             |                                          |          |         |                               |                               |
	|         | --network=                                        |                                          |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210811005046-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:51:28 UTC | Wed, 11 Aug 2021 00:51:30 UTC |
	|         | docker-network-20210811005046-1387367             |                                          |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210811005130-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:51:30 UTC | Wed, 11 Aug 2021 00:52:16 UTC |
	|         | docker-network-20210811005130-1387367             |                                          |          |         |                               |                               |
	|         | --network=bridge                                  |                                          |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210811005130-1387367    | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:52:16 UTC | Wed, 11 Aug 2021 00:52:19 UTC |
	|         | docker-network-20210811005130-1387367             |                                          |          |         |                               |                               |
	| start   | -p                                                | existing-network-20210811005219-1387367  | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:52:19 UTC | Wed, 11 Aug 2021 00:53:04 UTC |
	|         | existing-network-20210811005219-1387367           |                                          |          |         |                               |                               |
	|         | --network=existing-network                        |                                          |          |         |                               |                               |
	| delete  | -p                                                | existing-network-20210811005219-1387367  | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:53:04 UTC | Wed, 11 Aug 2021 00:53:07 UTC |
	|         | existing-network-20210811005219-1387367           |                                          |          |         |                               |                               |
	| start   | -p                                                | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:53:07 UTC | Wed, 11 Aug 2021 00:55:01 UTC |
	|         | multinode-20210811005307-1387367                  |                                          |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                          |          |         |                               |                               |
	|         | --nodes=2 -v=8 --alsologtostderr                  |                                          |          |         |                               |                               |
	|         | --driver=docker                                   |                                          |          |         |                               |                               |
	|         | --container-runtime=docker                        |                                          |          |         |                               |                               |
	| kubectl | -p multinode-20210811005307-1387367 -- apply -f   | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 00:55:02 UTC | Wed, 11 Aug 2021 00:55:02 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                          |          |         |                               |                               |
	| kubectl | -p multinode-20210811005307-1387367               | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:03 UTC | Wed, 11 Aug 2021 01:05:03 UTC |
	|         | -- get pods -o                                    |                                          |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                          |          |         |                               |                               |
	| kubectl | -p multinode-20210811005307-1387367               | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:03 UTC | Wed, 11 Aug 2021 01:05:03 UTC |
	|         | -- get pods -o                                    |                                          |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                          |          |         |                               |                               |
	| -p      | multinode-20210811005307-1387367                  | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:05 UTC | Wed, 11 Aug 2021 01:05:06 UTC |
	|         | logs -n 25                                        |                                          |          |         |                               |                               |
	| kubectl | -p multinode-20210811005307-1387367               | multinode-20210811005307-1387367         | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:07 UTC | Wed, 11 Aug 2021 01:05:07 UTC |
	|         | -- get pods -o                                    |                                          |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                          |          |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 00:53:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 00:53:07.230893 1439337 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:53:07.231024 1439337 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:53:07.231034 1439337 out.go:311] Setting ErrFile to fd 2...
	I0811 00:53:07.231038 1439337 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:53:07.231170 1439337 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 00:53:07.231476 1439337 out.go:305] Setting JSON to false
	I0811 00:53:07.232592 1439337 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":38134,"bootTime":1628605053,"procs":472,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:53:07.232678 1439337 start.go:121] virtualization:  
	I0811 00:53:07.235645 1439337 out.go:177] * [multinode-20210811005307-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 00:53:07.238230 1439337 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 00:53:07.236971 1439337 notify.go:169] Checking for updates...
	I0811 00:53:07.240187 1439337 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:53:07.242264 1439337 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 00:53:07.244351 1439337 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 00:53:07.244637 1439337 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 00:53:07.285517 1439337 docker.go:132] docker version: linux-20.10.8
	I0811 00:53:07.285613 1439337 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:53:07.395966 1439337 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:53:07.336423317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:53:07.396112 1439337 docker.go:244] overlay module found
	I0811 00:53:07.398556 1439337 out.go:177] * Using the docker driver based on user configuration
	I0811 00:53:07.398593 1439337 start.go:278] selected driver: docker
	I0811 00:53:07.398600 1439337 start.go:751] validating driver "docker" against <nil>
	I0811 00:53:07.398619 1439337 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 00:53:07.398679 1439337 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 00:53:07.398697 1439337 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 00:53:07.401034 1439337 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 00:53:07.401409 1439337 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:53:07.487345 1439337 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:53:07.429417039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:53:07.487464 1439337 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 00:53:07.487627 1439337 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 00:53:07.487648 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:53:07.487654 1439337 cni.go:154] 0 nodes found, recommending kindnet
	I0811 00:53:07.487671 1439337 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0811 00:53:07.487683 1439337 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0811 00:53:07.487688 1439337 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 00:53:07.487699 1439337 start_flags.go:277] config:
	{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0811 00:53:07.490053 1439337 out.go:177] * Starting control plane node multinode-20210811005307-1387367 in cluster multinode-20210811005307-1387367
	I0811 00:53:07.490090 1439337 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:53:07.492672 1439337 out.go:177] * Pulling base image ...
	I0811 00:53:07.492712 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:53:07.492763 1439337 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 00:53:07.492775 1439337 cache.go:56] Caching tarball of preloaded images
	I0811 00:53:07.492966 1439337 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 00:53:07.492994 1439337 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 00:53:07.493384 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:53:07.493423 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json: {Name:mkfc3ef7858325d4b50a477430c66e7ccebc5920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:07.493522 1439337 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:53:07.551182 1439337 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:53:07.551211 1439337 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:53:07.551227 1439337 cache.go:205] Successfully downloaded all kic artifacts
	I0811 00:53:07.551265 1439337 start.go:313] acquiring machines lock for multinode-20210811005307-1387367: {Name:mkb3178c18c35426cb33192cdbdabcbff217bc0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 00:53:07.551967 1439337 start.go:317] acquired machines lock for "multinode-20210811005307-1387367" in 675.926µs
	I0811 00:53:07.552002 1439337 start.go:89] Provisioning new machine with config: &{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:53:07.552092 1439337 start.go:126] createHost starting for "" (driver="docker")
	I0811 00:53:07.557170 1439337 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 00:53:07.557458 1439337 start.go:160] libmachine.API.Create for "multinode-20210811005307-1387367" (driver="docker")
	I0811 00:53:07.557495 1439337 client.go:168] LocalClient.Create starting
	I0811 00:53:07.557565 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 00:53:07.557600 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:53:07.557622 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:53:07.557740 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 00:53:07.557761 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:53:07.557785 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:53:07.558163 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 00:53:07.589516 1439337 cli_runner.go:162] docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 00:53:07.589604 1439337 network_create.go:255] running [docker network inspect multinode-20210811005307-1387367] to gather additional debugging logs...
	I0811 00:53:07.589630 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367
	W0811 00:53:07.620364 1439337 cli_runner.go:162] docker network inspect multinode-20210811005307-1387367 returned with exit code 1
	I0811 00:53:07.620397 1439337 network_create.go:258] error running [docker network inspect multinode-20210811005307-1387367]: docker network inspect multinode-20210811005307-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20210811005307-1387367
	I0811 00:53:07.620422 1439337 network_create.go:260] output of [docker network inspect multinode-20210811005307-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20210811005307-1387367
	
	** /stderr **
	I0811 00:53:07.620476 1439337 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:53:07.652329 1439337 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x400086ac48] misses:0}
	I0811 00:53:07.652378 1439337 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 00:53:07.652395 1439337 network_create.go:106] attempt to create docker network multinode-20210811005307-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 00:53:07.652451 1439337 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20210811005307-1387367
	I0811 00:53:07.719095 1439337 network_create.go:90] docker network multinode-20210811005307-1387367 192.168.49.0/24 created
	I0811 00:53:07.719128 1439337 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20210811005307-1387367" container
	I0811 00:53:07.719192 1439337 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 00:53:07.749649 1439337 cli_runner.go:115] Run: docker volume create multinode-20210811005307-1387367 --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 00:53:07.781727 1439337 oci.go:102] Successfully created a docker volume multinode-20210811005307-1387367
	I0811 00:53:07.781826 1439337 cli_runner.go:115] Run: docker run --rm --name multinode-20210811005307-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --entrypoint /usr/bin/test -v multinode-20210811005307-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 00:53:08.390978 1439337 oci.go:106] Successfully prepared a docker volume multinode-20210811005307-1387367
	W0811 00:53:08.391031 1439337 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 00:53:08.391038 1439337 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 00:53:08.391115 1439337 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 00:53:08.391321 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:53:08.391343 1439337 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 00:53:08.391394 1439337 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 00:53:08.519786 1439337 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210811005307-1387367 --name multinode-20210811005307-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210811005307-1387367 --network multinode-20210811005307-1387367 --ip 192.168.49.2 --volume multinode-20210811005307-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 00:53:09.058814 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Running}}
	I0811 00:53:09.115523 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:53:09.171781 1439337 cli_runner.go:115] Run: docker exec multinode-20210811005307-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 00:53:09.326769 1439337 oci.go:278] the created container "multinode-20210811005307-1387367" has a running status.
	I0811 00:53:09.326799 1439337 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa...
	I0811 00:53:09.536735 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 00:53:09.536785 1439337 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 00:53:09.708292 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:53:09.759288 1439337 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 00:53:09.759305 1439337 kic_runner.go:115] Args: [docker exec --privileged multinode-20210811005307-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 00:53:18.353799 1439337 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (9.962370689s)
	I0811 00:53:18.353827 1439337 kic.go:188] duration metric: took 9.962481 seconds to extract preloaded images to volume
	I0811 00:53:18.353915 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:53:18.401891 1439337 machine.go:88] provisioning docker machine ...
	I0811 00:53:18.401923 1439337 ubuntu.go:169] provisioning hostname "multinode-20210811005307-1387367"
	I0811 00:53:18.401989 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:18.447406 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:18.447602 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:18.447616 1439337 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210811005307-1387367 && echo "multinode-20210811005307-1387367" | sudo tee /etc/hostname
	I0811 00:53:18.578026 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210811005307-1387367
	
	I0811 00:53:18.578124 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:18.621976 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:18.622154 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:18.622175 1439337 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210811005307-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210811005307-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210811005307-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 00:53:18.744696 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 00:53:18.744724 1439337 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 00:53:18.744743 1439337 ubuntu.go:177] setting up certificates
	I0811 00:53:18.744752 1439337 provision.go:83] configureAuth start
	I0811 00:53:18.744813 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367
	I0811 00:53:18.777144 1439337 provision.go:137] copyHostCerts
	I0811 00:53:18.777184 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:53:18.777212 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 00:53:18.777224 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:53:18.777300 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 00:53:18.777379 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:53:18.777409 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 00:53:18.777418 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:53:18.777442 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 00:53:18.777484 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:53:18.777504 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 00:53:18.777513 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:53:18.777533 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 00:53:18.777573 1439337 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210811005307-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210811005307-1387367]
	I0811 00:53:19.088368 1439337 provision.go:171] copyRemoteCerts
	I0811 00:53:19.088459 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 00:53:19.088517 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.120358 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:19.203800 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 00:53:19.203855 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 00:53:19.220584 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 00:53:19.220680 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0811 00:53:19.237619 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 00:53:19.237670 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 00:53:19.254142 1439337 provision.go:86] duration metric: configureAuth took 509.370926ms
	I0811 00:53:19.254165 1439337 ubuntu.go:193] setting minikube options for container-runtime
	I0811 00:53:19.254377 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.286358 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:19.286533 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:19.286551 1439337 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 00:53:19.400879 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 00:53:19.400903 1439337 ubuntu.go:71] root file system type: overlay
	I0811 00:53:19.401076 1439337 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 00:53:19.401144 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.435116 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:19.435297 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:19.435397 1439337 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 00:53:19.557864 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
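	The unit text above relies on systemd's reset-then-set rule: an empty ExecStart= clears whatever start command was inherited, and the ExecStart= that follows becomes the only one, which is exactly what the embedded comments describe. A minimal sketch of the same pattern as a standalone override (the drop-in path below is hypothetical, not the file minikube writes):
	
	    # /etc/systemd/system/docker.service.d/10-execstart.conf  (hypothetical path)
	    [Service]
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	
	    # after editing, reload unit files and restart the service
	    sudo systemctl daemon-reload && sudo systemctl restart docker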
	
	I0811 00:53:19.557996 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:19.591634 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:53:19.591806 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50285 <nil> <nil>}
	I0811 00:53:19.591839 1439337 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 00:53:20.493929 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 00:53:19.549031023 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
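	The one-line SSH command above installs the new unit only when it differs from the current one: diff -u exits non-zero on any difference, so the || block moves docker.service.new into place, reloads systemd, re-enables docker, and restarts it. The same check-then-replace logic written out long-hand, with the paths used in this run:
	
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    fi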
	
	I0811 00:53:20.493958 1439337 machine.go:91] provisioned docker machine in 2.09204609s
	I0811 00:53:20.493974 1439337 client.go:171] LocalClient.Create took 12.936470185s
	I0811 00:53:20.493998 1439337 start.go:168] duration metric: libmachine.API.Create for "multinode-20210811005307-1387367" took 12.93653983s
	I0811 00:53:20.494013 1439337 start.go:267] post-start starting for "multinode-20210811005307-1387367" (driver="docker")
	I0811 00:53:20.494018 1439337 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 00:53:20.494089 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 00:53:20.494135 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.531089 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:20.616403 1439337 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 00:53:20.618850 1439337 command_runner.go:124] > NAME="Ubuntu"
	I0811 00:53:20.618868 1439337 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0811 00:53:20.618874 1439337 command_runner.go:124] > ID=ubuntu
	I0811 00:53:20.618879 1439337 command_runner.go:124] > ID_LIKE=debian
	I0811 00:53:20.618886 1439337 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0811 00:53:20.618896 1439337 command_runner.go:124] > VERSION_ID="20.04"
	I0811 00:53:20.618903 1439337 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0811 00:53:20.618913 1439337 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0811 00:53:20.618921 1439337 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0811 00:53:20.618931 1439337 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0811 00:53:20.618937 1439337 command_runner.go:124] > VERSION_CODENAME=focal
	I0811 00:53:20.618942 1439337 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0811 00:53:20.619199 1439337 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 00:53:20.619221 1439337 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 00:53:20.619232 1439337 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 00:53:20.619244 1439337 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 00:53:20.619253 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 00:53:20.619310 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 00:53:20.619402 1439337 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 00:53:20.619413 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /etc/ssl/certs/13873672.pem
	I0811 00:53:20.619503 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 00:53:20.625943 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:53:20.642886 1439337 start.go:270] post-start completed in 148.85973ms
	I0811 00:53:20.643292 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367
	I0811 00:53:20.674817 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:53:20.675066 1439337 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 00:53:20.675117 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.707066 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:20.789063 1439337 command_runner.go:124] > 79%
	I0811 00:53:20.789095 1439337 start.go:129] duration metric: createHost completed in 13.236994962s
	I0811 00:53:20.789106 1439337 start.go:80] releasing machines lock for "multinode-20210811005307-1387367", held for 13.237121812s
	I0811 00:53:20.789189 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367
	I0811 00:53:20.820583 1439337 ssh_runner.go:149] Run: systemctl --version
	I0811 00:53:20.820633 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.820636 1439337 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 00:53:20.820696 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:53:20.865413 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:20.881118 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:53:21.168801 1439337 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0811 00:53:21.168823 1439337 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0811 00:53:21.168830 1439337 command_runner.go:124] > <H1>302 Moved</H1>
	I0811 00:53:21.168835 1439337 command_runner.go:124] > The document has moved
	I0811 00:53:21.168844 1439337 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0811 00:53:21.168849 1439337 command_runner.go:124] > </BODY></HTML>
	I0811 00:53:21.168884 1439337 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.7)
	I0811 00:53:21.168908 1439337 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0811 00:53:21.169002 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 00:53:21.177915 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:53:21.185867 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:53:21.186753 1439337 command_runner.go:124] > [Unit]
	I0811 00:53:21.186788 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:53:21.186795 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:53:21.186802 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:53:21.186811 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:53:21.186826 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:53:21.186832 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:53:21.186841 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:53:21.186846 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:53:21.186850 1439337 command_runner.go:124] > [Service]
	I0811 00:53:21.186854 1439337 command_runner.go:124] > Type=notify
	I0811 00:53:21.186859 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:53:21.186869 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:53:21.186884 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:53:21.186896 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:53:21.186908 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:53:21.186918 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:53:21.186930 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:53:21.186941 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:53:21.186956 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:53:21.186966 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:53:21.186970 1439337 command_runner.go:124] > ExecStart=
	I0811 00:53:21.187000 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:53:21.187010 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:53:21.187021 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:53:21.187034 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:53:21.187041 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:53:21.187047 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:53:21.187051 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:53:21.187062 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:53:21.187071 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:53:21.187076 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:53:21.187088 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:53:21.187097 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:53:21.187107 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:53:21.187115 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:53:21.187125 1439337 command_runner.go:124] > KillMode=process
	I0811 00:53:21.187129 1439337 command_runner.go:124] > [Install]
	I0811 00:53:21.187134 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:53:21.188214 1439337 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 00:53:21.188297 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 00:53:21.197313 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 00:53:21.208334 1439337 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0811 00:53:21.208358 1439337 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
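	The two lines echoed back above are the entire /etc/crictl.yaml; they point crictl at the dockershim socket so CRI-level commands on the node go through Docker. Assuming crictl is present in the node image, the config can be exercised with, for example:
	
	    sudo crictl --config /etc/crictl.yaml ps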
	I0811 00:53:21.209738 1439337 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 00:53:21.297323 1439337 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 00:53:21.372264 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:53:21.380510 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:53:21.381058 1439337 command_runner.go:124] > [Unit]
	I0811 00:53:21.381097 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:53:21.381133 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:53:21.381157 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:53:21.381177 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:53:21.381217 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:53:21.381239 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:53:21.381328 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:53:21.381353 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:53:21.381370 1439337 command_runner.go:124] > [Service]
	I0811 00:53:21.381384 1439337 command_runner.go:124] > Type=notify
	I0811 00:53:21.381413 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:53:21.381438 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:53:21.381460 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:53:21.381494 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:53:21.381518 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:53:21.381539 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:53:21.381573 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:53:21.381599 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:53:21.381621 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:53:21.381653 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:53:21.381668 1439337 command_runner.go:124] > ExecStart=
	I0811 00:53:21.381707 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:53:21.381739 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:53:21.381757 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:53:21.381770 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:53:21.381785 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:53:21.381813 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:53:21.381828 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:53:21.381842 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:53:21.381858 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:53:21.381863 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:53:21.381872 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:53:21.381882 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:53:21.381889 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:53:21.381897 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:53:21.381919 1439337 command_runner.go:124] > KillMode=process
	I0811 00:53:21.381929 1439337 command_runner.go:124] > [Install]
	I0811 00:53:21.381935 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:53:21.382224 1439337 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 00:53:21.470440 1439337 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 00:53:21.479436 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:53:21.525516 1439337 command_runner.go:124] > 20.10.7
	I0811 00:53:21.528681 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:53:21.574595 1439337 command_runner.go:124] > 20.10.7
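	The block above is the usual way of bringing a unit whose file changed on disk back into service, condensed from the commands in this run:
	
	    sudo systemctl unmask docker.service
	    sudo systemctl enable docker.socket
	    sudo systemctl daemon-reload
	    sudo systemctl start docker
	    docker version --format '{{.Server.Version}}'   # prints 20.10.7 here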
	I0811 00:53:21.582609 1439337 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 00:53:21.582720 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:53:21.613380 1439337 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 00:53:21.616635 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
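	The bash one-liner above keeps /etc/hosts idempotent: it drops any existing host.minikube.internal entry, appends the current mapping, and copies the temp file back with sudo. The same pattern spelled out, using the values from this run:
	
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1	host.minikube.internal"; } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts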
	I0811 00:53:21.625779 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:53:21.625847 1439337 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:53:21.663044 1439337 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.21.3
	I0811 00:53:21.663068 1439337 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.21.3
	I0811 00:53:21.663077 1439337 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.21.3
	I0811 00:53:21.663084 1439337 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.21.3
	I0811 00:53:21.663091 1439337 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:53:21.663097 1439337 command_runner.go:124] > k8s.gcr.io/pause:3.4.1
	I0811 00:53:21.663106 1439337 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	I0811 00:53:21.663112 1439337 command_runner.go:124] > k8s.gcr.io/coredns/coredns:v1.8.0
	I0811 00:53:21.663118 1439337 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	I0811 00:53:21.663125 1439337 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	I0811 00:53:21.663315 1439337 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:53:21.663339 1439337 docker.go:466] Images already preloaded, skipping extraction
	I0811 00:53:21.663389 1439337 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 00:53:21.697655 1439337 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.21.3
	I0811 00:53:21.697676 1439337 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.21.3
	I0811 00:53:21.697682 1439337 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.21.3
	I0811 00:53:21.697689 1439337 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.21.3
	I0811 00:53:21.697696 1439337 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:53:21.697702 1439337 command_runner.go:124] > k8s.gcr.io/pause:3.4.1
	I0811 00:53:21.697707 1439337 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	I0811 00:53:21.697714 1439337 command_runner.go:124] > k8s.gcr.io/coredns/coredns:v1.8.0
	I0811 00:53:21.697719 1439337 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	I0811 00:53:21.697726 1439337 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	I0811 00:53:21.700611 1439337 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 00:53:21.700631 1439337 cache_images.go:74] Images are preloaded, skipping loading
	I0811 00:53:21.700685 1439337 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 00:53:21.784629 1439337 command_runner.go:124] > cgroupfs
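	The cgroupfs value reported here comes from querying the daemon directly and is what later lands in the cgroupDriver field of the generated KubeletConfiguration; the same query can be reproduced on the node with:
	
	    docker info --format '{{.CgroupDriver}}'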
	I0811 00:53:21.787914 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:53:21.787932 1439337 cni.go:154] 1 nodes found, recommending kindnet
	I0811 00:53:21.787947 1439337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 00:53:21.787965 1439337 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210811005307-1387367 NodeName:multinode-20210811005307-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 00:53:21.788104 1439337 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20210811005307-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
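	
	The YAML above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and, once moved into place, is handed to kubeadm. Trimmed of the long ignore-preflight list, the invocation that appears later in this log amounts to:
	
	    sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=Swap,Mem,SystemVerification   # plus the remaining overrides in the full command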
	
	I0811 00:53:21.788190 1439337 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20210811005307-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 00:53:21.788260 1439337 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 00:53:21.794232 1439337 command_runner.go:124] > kubeadm
	I0811 00:53:21.794247 1439337 command_runner.go:124] > kubectl
	I0811 00:53:21.794251 1439337 command_runner.go:124] > kubelet
	I0811 00:53:21.795113 1439337 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 00:53:21.795173 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 00:53:21.801536 1439337 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (410 bytes)
	I0811 00:53:21.814244 1439337 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 00:53:21.826825 1439337 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0811 00:53:21.839628 1439337 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 00:53:21.842527 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:53:21.850908 1439337 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367 for IP: 192.168.49.2
	I0811 00:53:21.850961 1439337 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 00:53:21.850979 1439337 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 00:53:21.851044 1439337 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.key
	I0811 00:53:21.851054 1439337 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt with IP's: []
	I0811 00:53:22.505827 1439337 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt ...
	I0811 00:53:22.505863 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt: {Name:mk58f59506cf4b15ae5dff9968b342b9b4dd6dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.506102 1439337 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.key ...
	I0811 00:53:22.506121 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.key: {Name:mk2a7a5ac082a1beab738847cb0aefdb72ccf8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.506221 1439337 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2
	I0811 00:53:22.506233 1439337 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 00:53:22.699824 1439337 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2 ...
	I0811 00:53:22.699859 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2: {Name:mkbf0e6641b314b363b4d83a714510687c837e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.700629 1439337 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2 ...
	I0811 00:53:22.700646 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2: {Name:mk92f9412a610334bc78bfca72003203465eeffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:22.700737 1439337 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt
	I0811 00:53:22.700804 1439337 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key
	I0811 00:53:22.700853 1439337 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key
	I0811 00:53:22.700864 1439337 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt with IP's: []
	I0811 00:53:23.020186 1439337 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt ...
	I0811 00:53:23.020224 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt: {Name:mk272d910979ad8934befd818ab4132904463f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:23.020428 1439337 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key ...
	I0811 00:53:23.020442 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key: {Name:mkea646513b6d583b727a9ece0d7a3b48dc4aa12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:53:23.020535 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0811 00:53:23.020558 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0811 00:53:23.020576 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0811 00:53:23.020591 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0811 00:53:23.020608 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 00:53:23.020622 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 00:53:23.020637 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 00:53:23.020647 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 00:53:23.020701 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 00:53:23.020741 1439337 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 00:53:23.020755 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 00:53:23.020792 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 00:53:23.020819 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 00:53:23.020846 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 00:53:23.020892 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:53:23.020924 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem -> /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.020939 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.020950 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.022022 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 00:53:23.038818 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0811 00:53:23.055261 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 00:53:23.071449 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 00:53:23.087530 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 00:53:23.103450 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 00:53:23.119851 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 00:53:23.136896 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 00:53:23.153418 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 00:53:23.169842 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 00:53:23.186461 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 00:53:23.202735 1439337 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 00:53:23.214909 1439337 ssh_runner.go:149] Run: openssl version
	I0811 00:53:23.219675 1439337 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0811 00:53:23.219752 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 00:53:23.226770 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.229514 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.229748 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.229797 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 00:53:23.234167 1439337 command_runner.go:124] > 3ec20f2e
	I0811 00:53:23.234563 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 00:53:23.241485 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 00:53:23.248101 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.250773 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.251053 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.251096 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:53:23.255479 1439337 command_runner.go:124] > b5213941
	I0811 00:53:23.255852 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 00:53:23.262757 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 00:53:23.269510 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.272214 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.272458 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.272501 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 00:53:23.277002 1439337 command_runner.go:124] > 51391683
	I0811 00:53:23.277586 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
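	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: x509 -hash -noout prints the certificate's subject-name hash, and a <hash>.0 symlink under /etc/ssl/certs lets TLS clients locate the CA. A minimal sketch for a single certificate (the file name here is a placeholder):
	
	    sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${h}.0"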
	I0811 00:53:23.284533 1439337 kubeadm.go:390] StartCluster: {Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0811 00:53:23.284684 1439337 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 00:53:23.320406 1439337 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 00:53:23.327257 1439337 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0811 00:53:23.327319 1439337 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0811 00:53:23.327339 1439337 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0811 00:53:23.327424 1439337 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 00:53:23.334037 1439337 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 00:53:23.334121 1439337 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 00:53:23.340598 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0811 00:53:23.340654 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0811 00:53:23.340671 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0811 00:53:23.340682 1439337 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 00:53:23.340709 1439337 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 00:53:23.340746 1439337 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 00:53:23.489668 1439337 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0811 00:53:23.489782 1439337 command_runner.go:124] > [preflight] Running pre-flight checks
	I0811 00:53:23.775568 1439337 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0811 00:53:23.775679 1439337 command_runner.go:124] > KERNEL_VERSION: 5.8.0-1041-aws
	I0811 00:53:23.775763 1439337 command_runner.go:124] > DOCKER_VERSION: 20.10.7
	I0811 00:53:23.775842 1439337 command_runner.go:124] > DOCKER_GRAPH_DRIVER: overlay2
	I0811 00:53:23.775917 1439337 command_runner.go:124] > OS: Linux
	I0811 00:53:23.776004 1439337 command_runner.go:124] > CGROUPS_CPU: enabled
	I0811 00:53:23.776094 1439337 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0811 00:53:23.776169 1439337 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0811 00:53:23.776255 1439337 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0811 00:53:23.776330 1439337 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0811 00:53:23.776410 1439337 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0811 00:53:23.776480 1439337 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0811 00:53:23.776559 1439337 command_runner.go:124] > CGROUPS_HUGETLB: enabled
	I0811 00:53:23.862226 1439337 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0811 00:53:23.862377 1439337 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0811 00:53:23.862507 1439337 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0811 00:53:24.101454 1439337 out.go:204]   - Generating certificates and keys ...
	I0811 00:53:24.098465 1439337 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 00:53:24.101714 1439337 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0811 00:53:24.101834 1439337 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0811 00:53:24.489620 1439337 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0811 00:53:24.745160 1439337 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0811 00:53:25.328492 1439337 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0811 00:53:25.617819 1439337 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0811 00:53:25.976748 1439337 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0811 00:53:25.977134 1439337 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210811005307-1387367] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 00:53:26.549399 1439337 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0811 00:53:26.549791 1439337 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210811005307-1387367] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 00:53:26.729655 1439337 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0811 00:53:27.895229 1439337 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0811 00:53:28.530493 1439337 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0811 00:53:28.530825 1439337 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 00:53:28.809125 1439337 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 00:53:29.081586 1439337 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 00:53:29.542201 1439337 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 00:53:30.169254 1439337 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 00:53:30.181487 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 00:53:30.183215 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 00:53:30.183269 1439337 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0811 00:53:30.279607 1439337 out.go:204]   - Booting up control plane ...
	I0811 00:53:30.277515 1439337 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 00:53:30.279718 1439337 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 00:53:30.289376 1439337 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 00:53:30.295581 1439337 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 00:53:30.296521 1439337 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0811 00:53:30.299349 1439337 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0811 00:53:45.804322 1439337 command_runner.go:124] > [apiclient] All control plane components are healthy after 15.503567 seconds
	I0811 00:53:45.804450 1439337 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0811 00:53:45.815410 1439337 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0811 00:53:46.342832 1439337 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0811 00:53:46.343111 1439337 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210811005307-1387367 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0811 00:53:46.857125 1439337 out.go:204]   - Configuring RBAC rules ...
	I0811 00:53:46.854752 1439337 command_runner.go:124] > [bootstrap-token] Using token: wm9z73.gxcefuuupq33tt64
	I0811 00:53:46.857266 1439337 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0811 00:53:46.861951 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0811 00:53:46.870610 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0811 00:53:46.873601 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0811 00:53:46.876487 1439337 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0811 00:53:46.879446 1439337 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0811 00:53:46.890177 1439337 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0811 00:53:47.205330 1439337 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0811 00:53:47.289378 1439337 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0811 00:53:47.289462 1439337 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0811 00:53:47.289552 1439337 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0811 00:53:47.289585 1439337 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0811 00:53:47.289654 1439337 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0811 00:53:47.289714 1439337 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0811 00:53:47.289779 1439337 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0811 00:53:47.289835 1439337 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0811 00:53:47.289898 1439337 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0811 00:53:47.289984 1439337 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0811 00:53:47.290063 1439337 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0811 00:53:47.290160 1439337 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0811 00:53:47.290249 1439337 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0811 00:53:47.290349 1439337 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token wm9z73.gxcefuuupq33tt64 \
	I0811 00:53:47.290465 1439337 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 \
	I0811 00:53:47.290488 1439337 command_runner.go:124] > 	--control-plane 
	I0811 00:53:47.290583 1439337 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0811 00:53:47.290676 1439337 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token wm9z73.gxcefuuupq33tt64 \
	I0811 00:53:47.290789 1439337 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 
	I0811 00:53:47.299094 1439337 command_runner.go:124] ! 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0811 00:53:47.299479 1439337 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
	I0811 00:53:47.299661 1439337 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
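	The kubeadm output above ends with the worker join command and its bootstrap token. If that token later expires, an equivalent join command can be regenerated on the control-plane node; a minimal sketch, assuming kubeadm is available inside the node (illustrative, not part of the captured log):
	  # create a fresh bootstrap token and print the matching "kubeadm join" command
	  sudo kubeadm token create --print-join-command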
	I0811 00:53:47.299700 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:53:47.299713 1439337 cni.go:154] 1 nodes found, recommending kindnet
	I0811 00:53:47.302166 1439337 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 00:53:47.302231 1439337 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0811 00:53:47.309328 1439337 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0811 00:53:47.309350 1439337 command_runner.go:124] >   Size: 2603192   	Blocks: 5088       IO Block: 4096   regular file
	I0811 00:53:47.309359 1439337 command_runner.go:124] > Device: 3fh/63d	Inode: 2356928     Links: 1
	I0811 00:53:47.309368 1439337 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 00:53:47.309374 1439337 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:53:47.309384 1439337 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:53:47.309390 1439337 command_runner.go:124] > Change: 2021-07-02 14:49:52.887930340 +0000
	I0811 00:53:47.309395 1439337 command_runner.go:124] >  Birth: -
	I0811 00:53:47.309634 1439337 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0811 00:53:47.309648 1439337 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0811 00:53:47.332949 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 00:53:47.968060 1439337 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0811 00:53:47.973734 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0811 00:53:47.995686 1439337 command_runner.go:124] > serviceaccount/kindnet created
	I0811 00:53:48.003424 1439337 command_runner.go:124] > daemonset.apps/kindnet created
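	The CNI manifest applied above creates the kindnet daemonset that provides pod networking for the cluster. Its rollout can be confirmed with standard kubectl; a minimal sketch (not part of the captured log):
	  # check that the kindnet daemonset exists and has finished rolling out
	  kubectl -n kube-system get daemonset kindnet
	  kubectl -n kube-system rollout status daemonset/kindnet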
	I0811 00:53:48.008817 1439337 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 00:53:48.008932 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.008982 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=multinode-20210811005307-1387367 minikube.k8s.io/updated_at=2021_08_11T00_53_48_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.025684 1439337 command_runner.go:124] > -16
	I0811 00:53:48.025759 1439337 ops.go:34] apiserver oom_adj: -16
	I0811 00:53:48.182751 1439337 command_runner.go:124] > node/multinode-20210811005307-1387367 labeled
	I0811 00:53:48.182805 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0811 00:53:48.182892 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.269489 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:48.770253 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:48.855219 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:49.269787 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:49.355880 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:49.770563 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:49.851997 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:50.270377 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:50.358188 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:50.769718 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:50.857668 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:51.270183 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:51.398578 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:51.770197 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:51.853748 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:52.270541 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:52.364693 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:52.770210 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:52.859953 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:53.270549 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:53.360996 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:53.770505 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:53.862013 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:54.270508 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:54.366704 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:54.770339 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:54.849916 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:55.269950 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:55.361500 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:55.769737 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:55.854548 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:56.269737 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:56.367685 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:56.770203 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:56.858192 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:57.269663 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:57.367457 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:57.770035 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:57.858672 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:58.270192 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:58.423406 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:58.770740 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:58.866235 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:59.269712 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:59.371686 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:53:59.770197 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:53:59.896603 1439337 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 00:54:00.270186 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 00:54:00.369297 1439337 command_runner.go:124] > NAME      SECRETS   AGE
	I0811 00:54:00.369319 1439337 command_runner.go:124] > default   1         0s
	I0811 00:54:00.369338 1439337 kubeadm.go:985] duration metric: took 12.360452191s to wait for elevateKubeSystemPrivileges.
	I0811 00:54:00.369355 1439337 kubeadm.go:392] StartCluster complete in 37.08482894s
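	The retry loop above simply polls for the "default" serviceaccount until the controller-manager has created it (elevateKubeSystemPrivileges waits on this before binding RBAC). The same check with plain kubectl, as a sketch (not part of the captured log):
	  # succeeds once the token controller has created the default serviceaccount
	  kubectl -n default get serviceaccount default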
	I0811 00:54:00.369373 1439337 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:54:00.369456 1439337 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:00.370517 1439337 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 00:54:00.370999 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:00.371278 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-202
10811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:00.372857 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 00:54:00.372881 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.372887 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.372892 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.373108 1439337 cert_rotation.go:137] Starting client certificate rotation controller
	I0811 00:54:00.402964 1439337 round_trippers.go:457] Response Status: 200 OK in 30 milliseconds
	I0811 00:54:00.402985 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.402991 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.402997 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.403000 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.403004 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.403008 1439337 round_trippers.go:463]     Content-Length: 291
	I0811 00:54:00.403011 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.403042 1439337 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"413","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.403741 1439337 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"413","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.403787 1439337 round_trippers.go:432] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 00:54:00.403794 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.403799 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.403803 1439337 round_trippers.go:442]     Content-Type: application/json
	I0811 00:54:00.403807 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.407774 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:00.407825 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.407837 1439337 round_trippers.go:463]     Content-Length: 291
	I0811 00:54:00.407842 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.407845 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.407848 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.407852 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.407856 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.407874 1439337 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"416","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.908742 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 00:54:00.908770 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.908776 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.908781 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.911151 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:00.911225 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.911237 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.911242 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.911247 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.911256 1439337 round_trippers.go:463]     Content-Length: 291
	I0811 00:54:00.911265 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.911273 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.911306 1439337 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"00410e73-f241-43ef-b4a8-7c53dde0739d","resourceVersion":"458","creationTimestamp":"2021-08-11T00:53:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0811 00:54:00.911392 1439337 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210811005307-1387367" rescaled to 1
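	The GET/PUT pair above rescales the coredns deployment from 2 replicas to 1 through the autoscaling/v1 Scale subresource. The same result can be produced manually with kubectl; a minimal sketch (minikube itself talks to the REST API directly, so this is only an equivalent, not the call it makes):
	  # scale coredns down to a single replica and confirm
	  kubectl -n kube-system scale deployment coredns --replicas=1
	  kubectl -n kube-system get deployment coredns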
	I0811 00:54:00.911468 1439337 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 00:54:00.913951 1439337 out.go:177] * Verifying Kubernetes components...
	I0811 00:54:00.911647 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 00:54:00.911807 1439337 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0811 00:54:00.914135 1439337 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210811005307-1387367"
	I0811 00:54:00.914151 1439337 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210811005307-1387367"
	W0811 00:54:00.914157 1439337 addons.go:147] addon storage-provisioner should already be in state true
	I0811 00:54:00.914182 1439337 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 00:54:00.914722 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:54:00.914888 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:54:00.914954 1439337 addons.go:59] Setting default-storageclass=true in profile "multinode-20210811005307-1387367"
	I0811 00:54:00.914970 1439337 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210811005307-1387367"
	I0811 00:54:00.915356 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:54:00.972127 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:00.972410 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-202
10811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:00.973770 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0811 00:54:00.973791 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:00.973796 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:00.973801 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:00.976125 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:00.976145 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:00.976149 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:00.976153 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:00.976157 1439337 round_trippers.go:463]     Content-Length: 109
	I0811 00:54:00.976160 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:00 GMT
	I0811 00:54:00.976164 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:00.976168 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:00.976186 1439337 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"458"},"items":[]}
	I0811 00:54:00.976870 1439337 addons.go:135] Setting addon default-storageclass=true in "multinode-20210811005307-1387367"
	W0811 00:54:00.976891 1439337 addons.go:147] addon default-storageclass should already be in state true
	I0811 00:54:00.976915 1439337 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 00:54:00.977467 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 00:54:01.014733 1439337 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 00:54:01.014853 1439337 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:54:01.014863 1439337 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 00:54:01.014930 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:54:01.064966 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:54:01.075831 1439337 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 00:54:01.075856 1439337 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 00:54:01.075919 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:54:01.124872 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:54:01.176492 1439337 command_runner.go:124] > apiVersion: v1
	I0811 00:54:01.176515 1439337 command_runner.go:124] > data:
	I0811 00:54:01.176520 1439337 command_runner.go:124] >   Corefile: |
	I0811 00:54:01.176525 1439337 command_runner.go:124] >     .:53 {
	I0811 00:54:01.176530 1439337 command_runner.go:124] >         errors
	I0811 00:54:01.176535 1439337 command_runner.go:124] >         health {
	I0811 00:54:01.176541 1439337 command_runner.go:124] >            lameduck 5s
	I0811 00:54:01.176545 1439337 command_runner.go:124] >         }
	I0811 00:54:01.176551 1439337 command_runner.go:124] >         ready
	I0811 00:54:01.176565 1439337 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0811 00:54:01.176577 1439337 command_runner.go:124] >            pods insecure
	I0811 00:54:01.176584 1439337 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0811 00:54:01.176595 1439337 command_runner.go:124] >            ttl 30
	I0811 00:54:01.176599 1439337 command_runner.go:124] >         }
	I0811 00:54:01.176610 1439337 command_runner.go:124] >         prometheus :9153
	I0811 00:54:01.176616 1439337 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0811 00:54:01.176627 1439337 command_runner.go:124] >            max_concurrent 1000
	I0811 00:54:01.176632 1439337 command_runner.go:124] >         }
	I0811 00:54:01.176637 1439337 command_runner.go:124] >         cache 30
	I0811 00:54:01.176642 1439337 command_runner.go:124] >         loop
	I0811 00:54:01.176649 1439337 command_runner.go:124] >         reload
	I0811 00:54:01.176657 1439337 command_runner.go:124] >         loadbalance
	I0811 00:54:01.176669 1439337 command_runner.go:124] >     }
	I0811 00:54:01.176675 1439337 command_runner.go:124] > kind: ConfigMap
	I0811 00:54:01.176685 1439337 command_runner.go:124] > metadata:
	I0811 00:54:01.176700 1439337 command_runner.go:124] >   creationTimestamp: "2021-08-11T00:53:47Z"
	I0811 00:54:01.176708 1439337 command_runner.go:124] >   name: coredns
	I0811 00:54:01.176714 1439337 command_runner.go:124] >   namespace: kube-system
	I0811 00:54:01.176725 1439337 command_runner.go:124] >   resourceVersion: "267"
	I0811 00:54:01.176731 1439337 command_runner.go:124] >   uid: 43a5565a-2909-4945-9fef-505f8754f208
	I0811 00:54:01.179090 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 00:54:01.179534 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:01.179818 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-202
10811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:01.181119 1439337 node_ready.go:35] waiting up to 6m0s for node "multinode-20210811005307-1387367" to be "Ready" ...
	I0811 00:54:01.181195 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:01.181208 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:01.181213 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:01.181218 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:01.183374 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:01.183392 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:01.183397 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:01.183401 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:01.183405 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:01.183412 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:01.183416 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:01 GMT
	I0811 00:54:01.183636 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:01.264182 1439337 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 00:54:01.288198 1439337 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 00:54:01.675248 1439337 command_runner.go:124] > configmap/coredns replaced
	I0811 00:54:01.685971 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:01.686036 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:01.686056 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:01.686073 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:01.688180 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:01.688238 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:01.688254 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:01.688268 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:01.688282 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:01.688310 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:01.688329 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:01 GMT
	I0811 00:54:01.688474 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:01.689842 1439337 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
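	The replace command issued at 00:54:01.179090 injects a hosts plugin stanza into the Corefile so that host.minikube.internal resolves to the host gateway IP. Reconstructed from the sed expression in that command, the inserted block inside the coredns ConfigMap looks like:
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }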
	I0811 00:54:01.764877 1439337 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0811 00:54:01.773913 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0811 00:54:01.784696 1439337 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0811 00:54:01.791154 1439337 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0811 00:54:01.800648 1439337 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0811 00:54:01.814770 1439337 command_runner.go:124] > pod/storage-provisioner created
	I0811 00:54:01.821224 1439337 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0811 00:54:01.824320 1439337 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0811 00:54:01.824350 1439337 addons.go:344] enableAddons completed in 912.55125ms
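	The enabled addons for this profile can be listed afterwards with the same binary used by the test; a minimal sketch (illustrative, not part of the captured log):
	  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 addons list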
	I0811 00:54:02.185368 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:02.185399 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:02.185407 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:02.185412 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:02.187600 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:02.187622 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:02.187627 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:02.187631 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:02.187635 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:02.187638 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:02.187642 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:02 GMT
	I0811 00:54:02.187757 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:02.685270 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:02.685295 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:02.685301 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:02.685306 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:02.687927 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:02.687977 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:02.687994 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:02.688010 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:02 GMT
	I0811 00:54:02.688026 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:02.688050 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:02.688067 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:02.688232 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:03.185505 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:03.185527 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:03.185534 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:03.185539 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:03.187798 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:03.187817 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:03.187823 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:03.187829 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:03.187833 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:03.187837 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:03 GMT
	I0811 00:54:03.187841 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:03.188193 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:03.188475 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
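	The poll loop above repeatedly GETs the node object and reports "Ready":"False" until the kubelet posts a Ready condition. The same condition can be read directly with kubectl; a sketch using a jsonpath query (not part of the captured log):
	  # prints "True" once the node's Ready condition is satisfied
	  kubectl get node multinode-20210811005307-1387367 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'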
	I0811 00:54:03.685397 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:03.685455 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:03.685473 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:03.685489 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:03.688335 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:03.688352 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:03.688357 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:03.688361 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:03.688364 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:03.688368 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:03.688371 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:03 GMT
	I0811 00:54:03.688493 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:04.186230 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:04.186251 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:04.186257 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:04.186262 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:04.188256 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:04.188270 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:04.188275 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:04.188279 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:04.188282 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:04.188286 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:04.188289 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:04 GMT
	I0811 00:54:04.188415 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:04.685280 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:04.685305 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:04.685312 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:04.685317 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:04.688026 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:04.688077 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:04.688104 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:04.688119 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:04 GMT
	I0811 00:54:04.688132 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:04.688146 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:04.688167 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:04.688531 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:05.186172 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:05.186193 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:05.186199 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:05.186204 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:05.188344 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:05.188376 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:05.188382 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:05.188386 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:05.188390 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:05.188393 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:05.188397 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:05 GMT
	I0811 00:54:05.188539 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:05.188827 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
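The block above is one iteration of minikube's node readiness wait: roughly every 500 ms it GETs the Node object for the profile and inspects its Ready condition, logging `has status "Ready":"False"` until the kubelet reports Ready. The sketch below shows what such a check can look like with a standard client-go clientset; the kubeconfig location, function names, and hand-rolled loop are illustrative assumptions, not minikube's actual node_ready.go code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node has its Ready condition set to True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumption: load the default kubeconfig; minikube writes a context named
	// after the profile (here multinode-20210811005307-1387367) into it.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500 ms, mirroring the interval visible in the log timestamps above.
	for {
		ready, err := nodeIsReady(context.Background(), cs, "multinode-20210811005307-1387367")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```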
	I0811 00:54:05.686189 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:05.686210 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:05.686216 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:05.686221 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:05.688315 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:05.688393 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:05.688406 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:05.688411 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:05.688414 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:05.688419 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:05.688423 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:05 GMT
	I0811 00:54:05.688559 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:06.186102 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:06.186130 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:06.186137 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:06.186142 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:06.188328 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:06.188380 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:06.188397 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:06.188411 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:06.188425 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:06.188450 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:06.188468 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:06 GMT
	I0811 00:54:06.188621 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:06.686119 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:06.686150 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:06.686156 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:06.686161 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:06.688594 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:06.688641 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:06.688647 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:06 GMT
	I0811 00:54:06.688651 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:06.688655 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:06.688659 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:06.688662 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:06.688773 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:07.185540 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:07.185566 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:07.185573 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:07.185577 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:07.187644 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:07.187666 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:07.187671 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:07.187675 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:07.187679 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:07.187682 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:07.187685 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:07 GMT
	I0811 00:54:07.187801 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:07.685878 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:07.685912 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:07.685919 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:07.685924 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:07.688573 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:07.688619 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:07.688635 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:07.688652 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:07 GMT
	I0811 00:54:07.688666 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:07.688678 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:07.688706 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:07.688850 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:07.689144 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:08.185304 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:08.185331 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:08.185338 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:08.185343 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:08.187419 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:08.187435 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:08.187440 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:08.187444 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:08.187447 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:08.187451 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:08.187457 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:08 GMT
	I0811 00:54:08.187605 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:08.685995 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:08.686024 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:08.686031 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:08.686037 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:08.688643 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:08.688661 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:08.688667 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:08 GMT
	I0811 00:54:08.688671 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:08.688674 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:08.688678 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:08.688681 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:08.688789 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:09.185573 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:09.185602 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:09.185609 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:09.185614 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:09.187695 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:09.187712 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:09.187717 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:09.187721 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:09.187725 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:09.187728 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:09.187732 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:09 GMT
	I0811 00:54:09.187895 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:09.686209 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:09.686236 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:09.686243 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:09.686250 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:09.688814 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:09.688831 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:09.688836 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:09.688840 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:09.688844 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:09.688847 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:09.688851 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:09 GMT
	I0811 00:54:09.688996 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:09.689299 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:10.185268 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:10.185295 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:10.185302 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:10.185309 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:10.187416 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:10.187437 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:10.187442 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:10.187446 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:10.187450 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:10 GMT
	I0811 00:54:10.187453 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:10.187457 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:10.187603 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:10.685206 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:10.685237 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:10.685243 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:10.685248 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:10.687795 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:10.687811 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:10.687817 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:10.687820 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:10.687824 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:10.687828 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:10.687832 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:10 GMT
	I0811 00:54:10.688006 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:11.186048 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:11.186078 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:11.186084 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:11.186089 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:11.188176 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:11.188192 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:11.188198 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:11.188202 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:11.188205 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:11.188209 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:11.188212 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:11 GMT
	I0811 00:54:11.188329 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:11.686197 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:11.686228 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:11.686234 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:11.686239 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:11.688800 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:11.688817 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:11.688822 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:11.688828 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:11.688831 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:11.688835 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:11.688838 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:11 GMT
	I0811 00:54:11.688964 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:12.185849 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:12.185876 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:12.185882 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:12.185887 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:12.187878 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:12.187895 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:12.187900 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:12.187904 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:12.187907 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:12.187911 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:12 GMT
	I0811 00:54:12.187915 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:12.188039 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:12.188306 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:12.686007 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:12.686035 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:12.686041 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:12.686046 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:12.688616 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:12.688632 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:12.688637 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:12.688641 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:12.688644 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:12.688648 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:12.688652 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:12 GMT
	I0811 00:54:12.688760 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:13.185263 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:13.185292 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:13.185298 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:13.185303 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:13.187308 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:13.187323 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:13.187328 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:13.187332 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:13.187335 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:13.187339 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:13 GMT
	I0811 00:54:13.187342 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:13.187459 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:13.686227 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:13.686260 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:13.686266 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:13.686271 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:13.688906 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:13.688923 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:13.688928 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:13.688933 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:13.688937 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:13.688940 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:13 GMT
	I0811 00:54:13.688944 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:13.689086 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:14.186050 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:14.186081 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:14.186087 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:14.186092 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:14.188076 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:14.188096 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:14.188101 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:14.188105 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:14.188108 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:14 GMT
	I0811 00:54:14.188111 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:14.188114 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:14.188217 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:14.188485 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:14.686231 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:14.686260 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:14.686267 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:14.686272 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:14.688989 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:14.689006 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:14.689027 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:14.689031 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:14.689034 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:14.689037 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:14.689041 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:14 GMT
	I0811 00:54:14.689225 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:15.185896 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:15.185925 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:15.185931 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:15.185936 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:15.187997 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:15.188018 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:15.188023 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:15.188027 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:15.188031 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:15.188034 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:15.188038 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:15 GMT
	I0811 00:54:15.188140 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:15.686119 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:15.686151 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:15.686158 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:15.686163 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:15.687819 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:15.687834 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:15.687839 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:15.687843 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:15.687847 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:15 GMT
	I0811 00:54:15.687850 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:15.687854 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:15.687968 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:16.185920 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:16.185950 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:16.185956 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:16.185962 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:16.187928 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:16.187946 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:16.187951 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:16.187955 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:16.187960 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:16.187964 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:16.187967 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:16 GMT
	I0811 00:54:16.188140 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:16.685275 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:16.685310 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:16.685316 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:16.685321 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:16.687618 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:16.687633 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:16.687638 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:16.687642 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:16.687646 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:16.687650 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:16.687653 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:16 GMT
	I0811 00:54:16.687768 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:16.688048 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
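The same fixed-interval wait is often written with the polling helper from k8s.io/apimachinery rather than a hand-rolled loop; the variant below is a self-contained sketch under that assumption, and the 6-minute timeout is chosen for illustration only, not taken from minikube's source.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig location, same node name as in the log above.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	name := "multinode-20210811005307-1387367"
	// Poll every 500 ms for up to 6 minutes, stopping once Ready is True.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet" and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```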
	I0811 00:54:17.186023 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:17.186048 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:17.186054 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:17.186059 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:17.188141 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:17.188156 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:17.188162 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:17.188165 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:17.188169 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:17.188173 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:17.188177 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:17 GMT
	I0811 00:54:17.188315 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:17.685962 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:17.685991 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:17.685998 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:17.686003 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:17.688663 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:17.688680 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:17.688685 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:17.688689 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:17.688693 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:17.688697 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:17 GMT
	I0811 00:54:17.688700 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:17.688863 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:18.185695 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:18.185724 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:18.185730 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:18.185735 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:18.187768 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:18.187784 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:18.187789 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:18.187792 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:18.187796 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:18.187799 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:18 GMT
	I0811 00:54:18.187803 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:18.187928 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:18.685928 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:18.685960 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:18.685967 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:18.685972 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:18.688580 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:18.688600 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:18.688605 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:18.688609 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:18.688612 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:18.688616 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:18.688620 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:18 GMT
	I0811 00:54:18.688912 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:18.689210 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:19.186204 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:19.186231 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:19.186237 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:19.186242 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:19.188215 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:19.188234 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:19.188238 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:19.188242 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:19 GMT
	I0811 00:54:19.188246 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:19.188251 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:19.188255 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:19.188360 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:19.686225 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:19.686256 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:19.686263 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:19.686268 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:19.688859 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:19.688877 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:19.688882 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:19.688885 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:19.688889 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:19 GMT
	I0811 00:54:19.688893 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:19.688896 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:19.689060 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:20.186217 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:20.186244 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:20.186250 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:20.186255 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:20.188291 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:20.188308 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:20.188313 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:20.188319 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:20 GMT
	I0811 00:54:20.188322 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:20.188326 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:20.188329 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:20.188470 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:20.686250 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:20.686280 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:20.686287 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:20.686292 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:20.689061 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:20.689078 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:20.689084 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:20.689088 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:20 GMT
	I0811 00:54:20.689092 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:20.689095 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:20.689098 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:20.689210 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:20.689466 1439337 node_ready.go:58] node "multinode-20210811005307-1387367" has status "Ready":"False"
	I0811 00:54:21.186161 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:21.186188 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.186194 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.186199 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.188184 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:21.188199 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.188204 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.188208 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.188211 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.188215 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.188218 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.188339 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"391","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5240 chars]
	I0811 00:54:21.686234 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:21.686260 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.686266 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.686270 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.688494 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:21.688515 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.688521 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.688525 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.688530 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.688534 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.688537 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.688644 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:21.688903 1439337 node_ready.go:49] node "multinode-20210811005307-1387367" has status "Ready":"True"
	I0811 00:54:21.688919 1439337 node_ready.go:38] duration metric: took 20.507776617s waiting for node "multinode-20210811005307-1387367" to be "Ready" ...
	I0811 00:54:21.688929 1439337 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
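The entries above show minikube's readiness wait: node_ready.go re-queries the Node object roughly every 500 ms until its Ready condition flips to True (here that took about 20.5 s), and pod_ready.go then begins the same kind of wait for the system-critical pods. As a rough illustration only — this is not minikube's actual implementation, and the helper name pollNodeReady plus the kubeconfig handling are assumptions — an equivalent poll with client-go might look like:

// Illustrative sketch only; not minikube's node_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollNodeReady (hypothetical helper) re-fetches the Node roughly every 500 ms,
// as the log above does, until its Ready condition reports True or the timeout expires.
func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Assumes a standard kubeconfig at ~/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := pollNodeReady(context.Background(), cs, "multinode-20210811005307-1387367", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

minikube's own loop differs in its logging (the round_trippers lines above) and timeout handling, but the condition check it waits on is the same idea.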
	I0811 00:54:21.689043 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:21.689058 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.689063 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.689069 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.691889 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:21.691954 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.691964 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.691968 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.691972 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.691982 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.691988 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.692574 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 52501 chars]
	I0811 00:54:21.699616 1439337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:21.699699 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:21.699712 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:21.699720 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:21.699730 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:21.701575 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:21.701614 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:21.701630 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:21.701651 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:21.701655 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:21.701658 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:21 GMT
	I0811 00:54:21.701662 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:21.701759 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:22.208179 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:22.208208 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:22.208214 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:22.208218 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:22.210474 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:22.210523 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:22.210539 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:22.210552 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:22.210566 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:22.210579 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:22.210603 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:22 GMT
	I0811 00:54:22.210725 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:22.708569 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:22.708600 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:22.708606 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:22.708611 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:22.711363 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:22.711422 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:22.711435 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:22.711440 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:22.711443 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:22.711447 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:22.711450 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:22 GMT
	I0811 00:54:22.711577 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:23.208184 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:23.208211 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:23.208217 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:23.208222 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:23.210427 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:23.210449 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:23.210454 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:23.210457 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:23.210461 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:23.210464 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:23.210470 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:23 GMT
	I0811 00:54:23.210587 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:23.708681 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:23.708711 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:23.708719 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:23.708724 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:23.711409 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:23.711466 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:23.711483 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:23.711497 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:23.711510 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:23.711523 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:23.711555 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:23 GMT
	I0811 00:54:23.711706 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:23.712060 1439337 pod_ready.go:102] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-11 00:54:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
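The Unschedulable message above means the coredns pod cannot be placed yet because the single node still carries the node.kubernetes.io/not-ready taint; the node lifecycle controller removes that taint once the node's Ready condition becomes True, after which the scheduler retries the pod. A small sketch of checking for that taint (illustrative only; hasNotReadyTaint is a hypothetical helper, and a real Node would be fetched as in the earlier sketch rather than the zero value used here):

// Illustrative sketch only; not minikube or Kubernetes code.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hasNotReadyTaint reports whether the node still carries the taint named in
// the scheduler message above.
func hasNotReadyTaint(node *corev1.Node) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/not-ready" {
			return true
		}
	}
	return false
}

func main() {
	var node corev1.Node // zero-value placeholder; no taints, so this prints false
	fmt.Println(hasNotReadyTaint(&node))
}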
	I0811 00:54:24.207851 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:24.207877 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:24.207883 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:24.207888 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:24.210090 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:24.210136 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:24.210149 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:24.210153 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:24.210156 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:24 GMT
	I0811 00:54:24.210172 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:24.210176 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:24.210291 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:24.708758 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:24.708787 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:24.708794 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:24.708798 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:24.711466 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:24.711511 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:24.711528 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:24.711542 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:24 GMT
	I0811 00:54:24.711585 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:24.711607 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:24.711621 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:24.711741 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:25.208776 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:25.208805 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:25.208812 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:25.208817 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:25.211061 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:25.211080 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:25.211085 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:25.211089 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:25.211111 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:25.211115 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:25.211120 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:25 GMT
	I0811 00:54:25.211238 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:25.708828 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:25.708859 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:25.708865 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:25.708870 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:25.710933 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:25.710979 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:25.710996 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:25.711010 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:25 GMT
	I0811 00:54:25.711023 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:25.711036 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:25.711058 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:25.711233 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:26.207820 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:26.207851 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:26.207857 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:26.207862 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:26.210007 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:26.210025 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:26.210031 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:26.210034 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:26.210038 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:26.210041 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:26.210045 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:26 GMT
	I0811 00:54:26.210158 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"449","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 4686 chars]
	I0811 00:54:26.210521 1439337 pod_ready.go:102] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-11 00:54:00 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0811 00:54:26.708200 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:26.708225 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:26.708231 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:26.708237 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:26.710822 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:26.710844 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:26.710849 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:26.710853 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:26.710856 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:26.710862 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:26.710866 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:26 GMT
	I0811 00:54:26.711099 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"506","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5944 chars]
	I0811 00:54:26.711513 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:26.711532 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:26.711537 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:26.711543 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:26.713286 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:26.713302 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:26.713306 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:26.713310 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:26.713314 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:26.713317 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:26.713321 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:26 GMT
	I0811 00:54:26.713822 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.208086 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:27.208119 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.208125 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.208130 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.214523 1439337 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0811 00:54:27.214544 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.214549 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.214553 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.214557 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.214560 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.214564 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.214752 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"506","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5944 chars]
	I0811 00:54:27.215175 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.215197 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.215203 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.215208 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.217286 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.217303 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.217309 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.217313 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.217316 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.217320 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.217324 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.217615 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.708216 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:54:27.708247 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.708256 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.708261 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.711039 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.711061 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.711067 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.711070 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.711075 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.711081 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.711085 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.711278 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 6071 chars]
	I0811 00:54:27.711669 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.711686 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.711692 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.711696 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.713719 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.713754 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.713759 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.713764 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.713767 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.713772 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.713786 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.713899 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.714183 1439337 pod_ready.go:92] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.714209 1439337 pod_ready.go:81] duration metric: took 6.01455566s waiting for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.714225 1439337 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.714283 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210811005307-1387367
	I0811 00:54:27.714294 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.714300 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.714304 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.716083 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.716099 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.716104 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.716108 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.716111 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.716115 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.716118 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.716248 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210811005307-1387367","namespace":"kube-system","uid":"b98555c3-d9ce-452c-a2de-7ee50a50311d","resourceVersion":"459","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.mirror":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.seen":"2021-08-11T00:53:47.643869676Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5588 chars]
	I0811 00:54:27.716542 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.716557 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.716562 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.716567 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.718079 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.718095 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.718100 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.718103 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.718108 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.718111 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.718116 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.718371 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.718616 1439337 pod_ready.go:92] pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.718632 1439337 pod_ready.go:81] duration metric: took 4.395624ms waiting for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.718648 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.718696 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210811005307-1387367
	I0811 00:54:27.718707 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.718712 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.718719 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.720419 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.720436 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.720441 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.720445 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.720448 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.720451 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.720455 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.720608 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210811005307-1387367","namespace":"kube-system","uid":"520b1e32-479d-4e0e-8867-276c958ae125","resourceVersion":"460","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.mirror":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.seen":"2021-08-11T00:53:31.835501949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 8113 chars]
	I0811 00:54:27.720983 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.720995 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.721001 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.721034 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.722659 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.722699 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.722715 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.722730 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.722743 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.722757 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.722782 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.722909 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.723192 1439337 pod_ready.go:92] pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.723210 1439337 pod_ready.go:81] duration metric: took 4.552898ms waiting for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.723222 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.723279 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210811005307-1387367
	I0811 00:54:27.723291 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.723296 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.723302 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.725128 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.725150 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.725155 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.725158 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.725162 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.725176 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.725182 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.725281 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210811005307-1387367","namespace":"kube-system","uid":"f0ca8783-2ede-4c80-adc7-94aa58a85ad1","resourceVersion":"462","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.mirror":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.seen":"2021-08-11T00:53:31.835503352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/c
onfig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/conf [truncated 7679 chars]
	I0811 00:54:27.725659 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.725675 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.725681 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.725685 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.727299 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.727315 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.727320 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.727323 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.727326 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.727330 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.727333 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.727584 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.727836 1439337 pod_ready.go:92] pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.727852 1439337 pod_ready.go:81] duration metric: took 4.621666ms waiting for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.727863 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.727915 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjx8s
	I0811 00:54:27.727926 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.727930 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.727935 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.729698 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.729726 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.729731 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.729737 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.729751 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.729762 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.729766 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.730096 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7","resourceVersion":"482","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"37aa45af-7498-4003-abc1-af1fe65a80b1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37aa45af-7498-4003-abc1-af1fe65a80b1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5777 chars]
	I0811 00:54:27.730435 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:27.730453 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.730459 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.730464 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.732310 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:27.732348 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.732365 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.732392 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.732412 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.732428 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.732441 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.732576 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:27.732838 1439337 pod_ready.go:92] pod "kube-proxy-sjx8s" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:27.732851 1439337 pod_ready.go:81] duration metric: took 4.977569ms waiting for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.732861 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:27.909224 1439337 request.go:600] Waited for 176.303519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:54:27.909312 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:54:27.909362 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:27.909376 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:27.909381 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:27.911653 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:27.911700 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:27.911716 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:27.911731 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:27.911745 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:27.911759 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:27.911782 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:27 GMT
	I0811 00:54:27.912027 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210811005307-1387367","namespace":"kube-system","uid":"7a24d14d-4566-4ab3-a237-634064615837","resourceVersion":"476","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.mirror":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.seen":"2021-08-11T00:53:47.643889688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"
f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f: [truncated 4561 chars]
	I0811 00:54:28.108661 1439337 request.go:600] Waited for 196.314258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:28.108727 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:54:28.108736 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.108742 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.108749 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.111360 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:28.111379 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.111384 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.111388 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.111435 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.111445 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.111448 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.111532 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:54:28.111814 1439337 pod_ready.go:92] pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:54:28.111828 1439337 pod_ready.go:81] duration metric: took 378.95647ms waiting for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:54:28.111840 1439337 pod_ready.go:38] duration metric: took 6.422878791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
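The block above is the readiness poll: for each system-critical pod, minikube repeatedly GETs the pod and its node until the pod reports Ready. A minimal client-go sketch of the same condition check (the kubeconfig path, clientset construction, and helper name are illustrative assumptions, not minikube's actual pod_ready.go code):

    // podIsReady reports whether the named pod's PodReady condition is True.
    // Minimal sketch; error handling abbreviated.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podIsReady(context.Background(), cs, "kube-system", "coredns-558bd4d5db-lpxc6")
        fmt.Println(ready, err)
    }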
	I0811 00:54:28.111860 1439337 api_server.go:50] waiting for apiserver process to appear ...
	I0811 00:54:28.111912 1439337 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 00:54:28.126199 1439337 command_runner.go:124] > 1962
	I0811 00:54:28.126233 1439337 api_server.go:70] duration metric: took 27.214733619s to wait for apiserver process to appear ...
	I0811 00:54:28.126241 1439337 api_server.go:86] waiting for apiserver healthz status ...
	I0811 00:54:28.126267 1439337 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 00:54:28.134963 1439337 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 00:54:28.135056 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/version?timeout=32s
	I0811 00:54:28.135067 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.135072 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.135089 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.135850 1439337 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0811 00:54:28.135866 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.135871 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.135875 1439337 round_trippers.go:463]     Content-Length: 263
	I0811 00:54:28.135878 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.135881 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.135885 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.135896 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.135925 1439337 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0811 00:54:28.136017 1439337 api_server.go:139] control plane version: v1.21.3
	I0811 00:54:28.136032 1439337 api_server.go:129] duration metric: took 9.786549ms to wait for apiserver health ...
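The /healthz probe followed by the /version request maps onto client-go's discovery client. A rough sketch, reusing the clientset and imports from the previous snippet:

    // checkAPIServer mirrors the two requests logged above: a raw GET on
    // /healthz (expecting "ok") and a server-version lookup. Sketch only.
    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        fmt.Printf("healthz: %s\n", body) // "ok" on a healthy apiserver

        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return fmt.Errorf("version: %w", err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion) // v1.21.3 in this run
        return nil
    }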
	I0811 00:54:28.136039 1439337 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 00:54:28.308286 1439337 request.go:600] Waited for 172.183028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.308385 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.308401 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.308431 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.308445 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.311908 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:28.311975 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.311991 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.312037 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.312057 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.312072 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.312117 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.312640 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 55311 chars]
	I0811 00:54:28.314241 1439337 system_pods.go:59] 8 kube-system pods found
	I0811 00:54:28.314277 1439337 system_pods.go:61] "coredns-558bd4d5db-lpxc6" [839d8a5e-9cef-4c9e-a07f-db7f529aaa6a] Running
	I0811 00:54:28.314286 1439337 system_pods.go:61] "etcd-multinode-20210811005307-1387367" [b98555c3-d9ce-452c-a2de-7ee50a50311d] Running
	I0811 00:54:28.314294 1439337 system_pods.go:61] "kindnet-xqj59" [5b61604f-90bf-41cc-9637-18fe68a7551c] Running
	I0811 00:54:28.314300 1439337 system_pods.go:61] "kube-apiserver-multinode-20210811005307-1387367" [520b1e32-479d-4e0e-8867-276c958ae125] Running
	I0811 00:54:28.314305 1439337 system_pods.go:61] "kube-controller-manager-multinode-20210811005307-1387367" [f0ca8783-2ede-4c80-adc7-94aa58a85ad1] Running
	I0811 00:54:28.314317 1439337 system_pods.go:61] "kube-proxy-sjx8s" [b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7] Running
	I0811 00:54:28.314322 1439337 system_pods.go:61] "kube-scheduler-multinode-20210811005307-1387367" [7a24d14d-4566-4ab3-a237-634064615837] Running
	I0811 00:54:28.314327 1439337 system_pods.go:61] "storage-provisioner" [3e157891-c819-4c4a-8e4d-da074ce5a161] Running
	I0811 00:54:28.314334 1439337 system_pods.go:74] duration metric: took 178.290592ms to wait for pod list to return data ...
	I0811 00:54:28.314346 1439337 default_sa.go:34] waiting for default service account to be created ...
	I0811 00:54:28.508685 1439337 request.go:600] Waited for 194.27185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0811 00:54:28.508769 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0811 00:54:28.508820 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.508833 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.508839 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.511273 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:28.511298 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.511303 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.511307 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.511310 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.511314 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.511317 1439337 round_trippers.go:463]     Content-Length: 304
	I0811 00:54:28.511320 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.511340 1439337 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5d93c483-d14d-4998-a058-1bf4f42a56a6","resourceVersion":"405","creationTimestamp":"2021-08-11T00:54:00Z"},"secrets":[{"name":"default-token-zjkv2"}]}]}
	I0811 00:54:28.512160 1439337 default_sa.go:45] found service account: "default"
	I0811 00:54:28.512183 1439337 default_sa.go:55] duration metric: took 197.829129ms for default service account to be created ...
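The default service-account wait is simply a List of ServiceAccounts in the "default" namespace until one named "default" appears. As a sketch, again assuming the same clientset and imports as above:

    // hasDefaultServiceAccount reports whether the "default" ServiceAccount
    // exists yet; it is created shortly after the control plane comes up.
    func hasDefaultServiceAccount(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }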
	I0811 00:54:28.512191 1439337 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 00:54:28.708553 1439337 request.go:600] Waited for 196.2924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.708626 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:54:28.708640 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.708646 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.708651 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.712159 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:28.712221 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.712239 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.712253 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.712268 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.712296 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.712315 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.712875 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 55311 chars]
	I0811 00:54:28.714522 1439337 system_pods.go:86] 8 kube-system pods found
	I0811 00:54:28.714545 1439337 system_pods.go:89] "coredns-558bd4d5db-lpxc6" [839d8a5e-9cef-4c9e-a07f-db7f529aaa6a] Running
	I0811 00:54:28.714552 1439337 system_pods.go:89] "etcd-multinode-20210811005307-1387367" [b98555c3-d9ce-452c-a2de-7ee50a50311d] Running
	I0811 00:54:28.714561 1439337 system_pods.go:89] "kindnet-xqj59" [5b61604f-90bf-41cc-9637-18fe68a7551c] Running
	I0811 00:54:28.714567 1439337 system_pods.go:89] "kube-apiserver-multinode-20210811005307-1387367" [520b1e32-479d-4e0e-8867-276c958ae125] Running
	I0811 00:54:28.714581 1439337 system_pods.go:89] "kube-controller-manager-multinode-20210811005307-1387367" [f0ca8783-2ede-4c80-adc7-94aa58a85ad1] Running
	I0811 00:54:28.714587 1439337 system_pods.go:89] "kube-proxy-sjx8s" [b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7] Running
	I0811 00:54:28.714597 1439337 system_pods.go:89] "kube-scheduler-multinode-20210811005307-1387367" [7a24d14d-4566-4ab3-a237-634064615837] Running
	I0811 00:54:28.714602 1439337 system_pods.go:89] "storage-provisioner" [3e157891-c819-4c4a-8e4d-da074ce5a161] Running
	I0811 00:54:28.714613 1439337 system_pods.go:126] duration metric: took 202.414183ms to wait for k8s-apps to be running ...
	I0811 00:54:28.714623 1439337 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 00:54:28.714676 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:54:28.724225 1439337 system_svc.go:56] duration metric: took 9.596075ms WaitForService to wait for kubelet.
	I0811 00:54:28.724251 1439337 kubeadm.go:547] duration metric: took 27.812752383s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 00:54:28.724296 1439337 node_conditions.go:102] verifying NodePressure condition ...
	I0811 00:54:28.908667 1439337 request.go:600] Waited for 184.299211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0811 00:54:28.908723 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0811 00:54:28.908735 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:28.908744 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:28.908749 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:28.911429 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:28.911447 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:28.911452 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:28.911455 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:28.911459 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:28 GMT
	I0811 00:54:28.911463 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:28.911466 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:28.911564 1439337 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-
managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","o [truncated 5325 chars]
	I0811 00:54:28.912878 1439337 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:54:28.912911 1439337 node_conditions.go:123] node cpu capacity is 2
	I0811 00:54:28.912924 1439337 node_conditions.go:105] duration metric: took 188.622187ms to run NodePressure ...
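The NodePressure step reads capacity figures (cpu 2, ephemeral-storage 60796312Ki in this run) and pressure conditions out of the node list. A hedged sketch of that read, with the same clientset as above:

    // reportNodeCapacity prints each node's CPU and ephemeral-storage capacity
    // and flags any pressure condition that is True.
    func reportNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            for _, cond := range n.Status.Conditions {
                pressure := cond.Type == corev1.NodeMemoryPressure ||
                    cond.Type == corev1.NodeDiskPressure ||
                    cond.Type == corev1.NodePIDPressure
                if pressure && cond.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition %s is True\n", cond.Type)
                }
            }
        }
        return nil
    }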
	I0811 00:54:28.912932 1439337 start.go:231] waiting for startup goroutines ...
	I0811 00:54:28.916118 1439337 out.go:177] 
	I0811 00:54:28.916404 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:54:28.918950 1439337 out.go:177] * Starting node multinode-20210811005307-1387367-m02 in cluster multinode-20210811005307-1387367
	I0811 00:54:28.918979 1439337 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:54:28.921857 1439337 out.go:177] * Pulling base image ...
	I0811 00:54:28.921885 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:54:28.921897 1439337 cache.go:56] Caching tarball of preloaded images
	I0811 00:54:28.921950 1439337 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:54:28.922196 1439337 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 00:54:28.922224 1439337 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 00:54:28.922340 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:54:28.973092 1439337 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:54:28.973120 1439337 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:54:28.973136 1439337 cache.go:205] Successfully downloaded all kic artifacts
	I0811 00:54:28.973168 1439337 start.go:313] acquiring machines lock for multinode-20210811005307-1387367-m02: {Name:mkd6e705422cef7ce7e260ef11f9e40cbb420b68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 00:54:28.973822 1439337 start.go:317] acquired machines lock for "multinode-20210811005307-1387367-m02" in 627.188µs
	I0811 00:54:28.973860 1439337 start.go:89] Provisioning new machine with config: &{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0811 00:54:28.973951 1439337 start.go:126] createHost starting for "m02" (driver="docker")
	I0811 00:54:28.976950 1439337 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 00:54:28.977094 1439337 start.go:160] libmachine.API.Create for "multinode-20210811005307-1387367" (driver="docker")
	I0811 00:54:28.977123 1439337 client.go:168] LocalClient.Create starting
	I0811 00:54:28.977191 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 00:54:28.977225 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:54:28.977245 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:54:28.977358 1439337 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 00:54:28.977374 1439337 main.go:130] libmachine: Decoding PEM data...
	I0811 00:54:28.977386 1439337 main.go:130] libmachine: Parsing certificate...
	I0811 00:54:28.977674 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:54:29.009692 1439337 network_create.go:67] Found existing network {name:multinode-20210811005307-1387367 subnet:0x40010b5050 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0811 00:54:29.009733 1439337 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20210811005307-1387367-m02" container
	I0811 00:54:29.009802 1439337 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 00:54:29.042081 1439337 cli_runner.go:115] Run: docker volume create multinode-20210811005307-1387367-m02 --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --label created_by.minikube.sigs.k8s.io=true
	I0811 00:54:29.081155 1439337 oci.go:102] Successfully created a docker volume multinode-20210811005307-1387367-m02
	I0811 00:54:29.081242 1439337 cli_runner.go:115] Run: docker run --rm --name multinode-20210811005307-1387367-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --entrypoint /usr/bin/test -v multinode-20210811005307-1387367-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 00:54:29.693260 1439337 oci.go:106] Successfully prepared a docker volume multinode-20210811005307-1387367-m02
	W0811 00:54:29.693323 1439337 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 00:54:29.693334 1439337 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 00:54:29.693399 1439337 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 00:54:29.693609 1439337 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:54:29.693632 1439337 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 00:54:29.693683 1439337 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 00:54:29.825794 1439337 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210811005307-1387367-m02 --name multinode-20210811005307-1387367-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210811005307-1387367-m02 --network multinode-20210811005307-1387367 --ip 192.168.49.3 --volume multinode-20210811005307-1387367-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 00:54:30.404178 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Running}}
	I0811 00:54:30.460317 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 00:54:30.513780 1439337 cli_runner.go:115] Run: docker exec multinode-20210811005307-1387367-m02 stat /var/lib/dpkg/alternatives/iptables
	I0811 00:54:30.602948 1439337 oci.go:278] the created container "multinode-20210811005307-1387367-m02" has a running status.
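The two inspect calls above poll the freshly created container's state through the docker CLI. The same probe as a small self-contained Go program (container name taken from this run):

    // containerRunning shells out to the docker CLI, like the cli_runner lines
    // above, and reports whether the container's State.Running flag is true.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerRunning(name string) (bool, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Running}}").Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "true", nil
    }

    func main() {
        running, err := containerRunning("multinode-20210811005307-1387367-m02")
        fmt.Println(running, err)
    }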
	I0811 00:54:30.602980 1439337 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa...
	I0811 00:54:30.860232 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 00:54:30.860275 1439337 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 00:54:31.009081 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 00:54:31.073950 1439337 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 00:54:31.073977 1439337 kic_runner.go:115] Args: [docker exec --privileged multinode-20210811005307-1387367-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 00:54:39.886611 1439337 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v multinode-20210811005307-1387367-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.19289047s)
	I0811 00:54:39.886639 1439337 kic.go:188] duration metric: took 10.193004 seconds to extract preloaded images to volume
	I0811 00:54:39.886725 1439337 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 00:54:39.918981 1439337 machine.go:88] provisioning docker machine ...
	I0811 00:54:39.919016 1439337 ubuntu.go:169] provisioning hostname "multinode-20210811005307-1387367-m02"
	I0811 00:54:39.919076 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:39.959754 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:39.959932 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:39.959956 1439337 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210811005307-1387367-m02 && echo "multinode-20210811005307-1387367-m02" | sudo tee /etc/hostname
	I0811 00:54:40.116900 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210811005307-1387367-m02
	
	I0811 00:54:40.116976 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:40.155470 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:40.155639 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:40.155661 1439337 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210811005307-1387367-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210811005307-1387367-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210811005307-1387367-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 00:54:40.284661 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 00:54:40.284689 1439337 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 00:54:40.284706 1439337 ubuntu.go:177] setting up certificates
	I0811 00:54:40.284715 1439337 provision.go:83] configureAuth start
	I0811 00:54:40.284775 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367-m02
	I0811 00:54:40.317683 1439337 provision.go:137] copyHostCerts
	I0811 00:54:40.317733 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:54:40.317764 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 00:54:40.317777 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 00:54:40.317847 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 00:54:40.317922 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:54:40.317945 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 00:54:40.317955 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 00:54:40.317977 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 00:54:40.318016 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:54:40.318036 1439337 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 00:54:40.318046 1439337 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 00:54:40.318066 1439337 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 00:54:40.318111 1439337 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.multinode-20210811005307-1387367-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210811005307-1387367-m02]
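For context on the provision step logged just above: a server certificate with that organization and SAN list can be issued from the local CA with Go's crypto/x509 roughly as in the sketch below. This is an illustration, not minikube's code; the ca.pem/ca-key.pem file names and the PKCS#1 RSA key encoding are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Assumed local copies of the CA material; PKCS#1 RSA is also an assumption.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)

	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	// Subject organization and SANs copied from the provision.go line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-20210811005307-1387367-m02"}},
		DNSNames:     []string{"localhost", "minikube", "multinode-20210811005307-1387367-m02"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.3"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}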
	I0811 00:54:40.826270 1439337 provision.go:171] copyRemoteCerts
	I0811 00:54:40.826343 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 00:54:40.826387 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:40.859966 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:40.944597 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 00:54:40.944657 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 00:54:40.963217 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 00:54:40.963273 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0811 00:54:40.979836 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 00:54:40.979891 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 00:54:40.996513 1439337 provision.go:86] duration metric: configureAuth took 711.780162ms
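Once ca.pem, server.pem and server-key.pem are in /etc/docker and dockerd is started with the --tlsverify flags configured in the docker.service unit further down, a client can only reach TCP port 2376 by presenting a certificate signed by the same CA. Below is a minimal Go sketch of such a check; the address and the local cert.pem/key.pem file names are illustrative, and this is not how the test itself verifies the daemon.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem") // the CA that signed /etc/docker/server.pem
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	// cert.pem/key.pem are the host-side client pair issued by the same CA.
	clientCert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{clientCert},
	}}}

	// dockerd listens on tcp://0.0.0.0:2376 inside the node; the address here is
	// illustrative (the test publishes it on a random 127.0.0.1 port instead).
	resp, err := client.Get("https://192.168.49.3:2376/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}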
	I0811 00:54:40.996540 1439337 ubuntu.go:193] setting minikube options for container-runtime
	I0811 00:54:40.996759 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:41.034283 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:41.034451 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:41.034467 1439337 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 00:54:41.148976 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 00:54:41.149058 1439337 ubuntu.go:71] root file system type: overlay
	I0811 00:54:41.149284 1439337 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 00:54:41.149386 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:41.189427 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:41.189596 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:41.189698 1439337 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 00:54:41.313409 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 00:54:41.313497 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:41.345873 1439337 main.go:130] libmachine: Using SSH client type: native
	I0811 00:54:41.346041 1439337 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50290 <nil> <nil>}
	I0811 00:54:41.346068 1439337 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 00:54:42.360236 1439337 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 00:54:41.308523488 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 00:54:42.360299 1439337 machine.go:91] provisioned docker machine in 2.441293199s
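The docker.service override shown above is generated on the host and streamed over SSH, then swapped in only if it differs from the existing unit. A hedged sketch of how such a unit could be rendered with Go's text/template follows; the template text is simplified and is not minikube's actual template — only the NO_PROXY value, the dockerd TLS flags and the insecure-registry range are taken from the log.

package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Requires=docker.socket

[Service]
Type=notify
Restart=on-failure
{{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
{{end}}# Clear the inherited ExecStart, then set ours; systemd rejects two
# ExecStart= lines for a Type=notify service.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --insecure-registry {{.InsecureRegistry}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// Values taken from the log above; everything else is illustrative.
	err := t.Execute(os.Stdout, struct {
		NoProxy          string
		InsecureRegistry string
	}{NoProxy: "192.168.49.2", InsecureRegistry: "10.96.0.0/12"})
	if err != nil {
		panic(err)
	}
}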
	I0811 00:54:42.360321 1439337 client.go:171] LocalClient.Create took 13.383192304s
	I0811 00:54:42.360341 1439337 start.go:168] duration metric: libmachine.API.Create for "multinode-20210811005307-1387367" took 13.383247409s
	I0811 00:54:42.360376 1439337 start.go:267] post-start starting for "multinode-20210811005307-1387367-m02" (driver="docker")
	I0811 00:54:42.360397 1439337 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 00:54:42.360477 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 00:54:42.360534 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.405936 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:42.492512 1439337 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 00:54:42.495129 1439337 command_runner.go:124] > NAME="Ubuntu"
	I0811 00:54:42.495150 1439337 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0811 00:54:42.495155 1439337 command_runner.go:124] > ID=ubuntu
	I0811 00:54:42.495161 1439337 command_runner.go:124] > ID_LIKE=debian
	I0811 00:54:42.495168 1439337 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0811 00:54:42.495174 1439337 command_runner.go:124] > VERSION_ID="20.04"
	I0811 00:54:42.495182 1439337 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0811 00:54:42.495188 1439337 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0811 00:54:42.495199 1439337 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0811 00:54:42.495209 1439337 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0811 00:54:42.495217 1439337 command_runner.go:124] > VERSION_CODENAME=focal
	I0811 00:54:42.495223 1439337 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0811 00:54:42.495281 1439337 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 00:54:42.495300 1439337 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 00:54:42.495312 1439337 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 00:54:42.495322 1439337 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 00:54:42.495332 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 00:54:42.495387 1439337 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 00:54:42.495470 1439337 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 00:54:42.495483 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /etc/ssl/certs/13873672.pem
	I0811 00:54:42.495574 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 00:54:42.502251 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:54:42.519818 1439337 start.go:270] post-start completed in 159.413663ms
	I0811 00:54:42.520251 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367-m02
	I0811 00:54:42.552517 1439337 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/config.json ...
	I0811 00:54:42.552766 1439337 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 00:54:42.552817 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.584493 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:42.665087 1439337 command_runner.go:124] > 81%
	I0811 00:54:42.665118 1439337 start.go:129] duration metric: createHost completed in 13.69115856s
	I0811 00:54:42.665127 1439337 start.go:80] releasing machines lock for "multinode-20210811005307-1387367-m02", held for 13.691288217s
	I0811 00:54:42.665209 1439337 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367-m02
	I0811 00:54:42.701455 1439337 out.go:177] * Found network options:
	I0811 00:54:42.703731 1439337 out.go:177]   - NO_PROXY=192.168.49.2
	W0811 00:54:42.703771 1439337 proxy.go:118] fail to check proxy env: Error ip not in block
	W0811 00:54:42.703803 1439337 proxy.go:118] fail to check proxy env: Error ip not in block
	I0811 00:54:42.703935 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 00:54:42.703960 1439337 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 00:54:42.703982 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.704027 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 00:54:42.750435 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:42.771505 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 00:54:43.019136 1439337 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0811 00:54:43.019202 1439337 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0811 00:54:43.019221 1439337 command_runner.go:124] > <H1>302 Moved</H1>
	I0811 00:54:43.019239 1439337 command_runner.go:124] > The document has moved
	I0811 00:54:43.019257 1439337 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0811 00:54:43.019290 1439337 command_runner.go:124] > </BODY></HTML>
	I0811 00:54:43.024245 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:54:43.034341 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:54:43.034361 1439337 command_runner.go:124] > [Unit]
	I0811 00:54:43.034370 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:54:43.034377 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:54:43.034382 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:54:43.034391 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:54:43.034401 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:54:43.034407 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:54:43.034415 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:54:43.034420 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:54:43.034427 1439337 command_runner.go:124] > [Service]
	I0811 00:54:43.034432 1439337 command_runner.go:124] > Type=notify
	I0811 00:54:43.034446 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:54:43.034452 1439337 command_runner.go:124] > Environment=NO_PROXY=192.168.49.2
	I0811 00:54:43.034467 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:54:43.034482 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:54:43.034492 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:54:43.034504 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:54:43.034519 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:54:43.034529 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:54:43.034543 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:54:43.034561 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:54:43.034575 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:54:43.034579 1439337 command_runner.go:124] > ExecStart=
	I0811 00:54:43.034605 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:54:43.034617 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:54:43.034628 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:54:43.034641 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:54:43.034650 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:54:43.034658 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:54:43.034663 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:54:43.034671 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:54:43.034684 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:54:43.034689 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:54:43.034696 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:54:43.034707 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:54:43.034716 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:54:43.034724 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:54:43.034733 1439337 command_runner.go:124] > KillMode=process
	I0811 00:54:43.034743 1439337 command_runner.go:124] > [Install]
	I0811 00:54:43.034752 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:54:43.034764 1439337 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 00:54:43.034817 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 00:54:43.045479 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 00:54:43.064087 1439337 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0811 00:54:43.064109 1439337 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
	I0811 00:54:43.065432 1439337 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 00:54:43.149282 1439337 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 00:54:43.238390 1439337 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 00:54:43.246922 1439337 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0811 00:54:43.247684 1439337 command_runner.go:124] > [Unit]
	I0811 00:54:43.247718 1439337 command_runner.go:124] > Description=Docker Application Container Engine
	I0811 00:54:43.247738 1439337 command_runner.go:124] > Documentation=https://docs.docker.com
	I0811 00:54:43.247779 1439337 command_runner.go:124] > BindsTo=containerd.service
	I0811 00:54:43.247797 1439337 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0811 00:54:43.247803 1439337 command_runner.go:124] > Wants=network-online.target
	I0811 00:54:43.247812 1439337 command_runner.go:124] > Requires=docker.socket
	I0811 00:54:43.247825 1439337 command_runner.go:124] > StartLimitBurst=3
	I0811 00:54:43.247836 1439337 command_runner.go:124] > StartLimitIntervalSec=60
	I0811 00:54:43.247841 1439337 command_runner.go:124] > [Service]
	I0811 00:54:43.247851 1439337 command_runner.go:124] > Type=notify
	I0811 00:54:43.247856 1439337 command_runner.go:124] > Restart=on-failure
	I0811 00:54:43.247862 1439337 command_runner.go:124] > Environment=NO_PROXY=192.168.49.2
	I0811 00:54:43.247872 1439337 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0811 00:54:43.247891 1439337 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0811 00:54:43.247909 1439337 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0811 00:54:43.247920 1439337 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0811 00:54:43.247930 1439337 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0811 00:54:43.247944 1439337 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0811 00:54:43.247954 1439337 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0811 00:54:43.247967 1439337 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0811 00:54:43.247982 1439337 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0811 00:54:43.247986 1439337 command_runner.go:124] > ExecStart=
	I0811 00:54:43.248015 1439337 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0811 00:54:43.248026 1439337 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0811 00:54:43.248037 1439337 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0811 00:54:43.248049 1439337 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0811 00:54:43.248057 1439337 command_runner.go:124] > LimitNOFILE=infinity
	I0811 00:54:43.248069 1439337 command_runner.go:124] > LimitNPROC=infinity
	I0811 00:54:43.248074 1439337 command_runner.go:124] > LimitCORE=infinity
	I0811 00:54:43.248082 1439337 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0811 00:54:43.248094 1439337 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0811 00:54:43.248102 1439337 command_runner.go:124] > TasksMax=infinity
	I0811 00:54:43.248107 1439337 command_runner.go:124] > TimeoutStartSec=0
	I0811 00:54:43.248120 1439337 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0811 00:54:43.248125 1439337 command_runner.go:124] > Delegate=yes
	I0811 00:54:43.248133 1439337 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0811 00:54:43.248145 1439337 command_runner.go:124] > KillMode=process
	I0811 00:54:43.248151 1439337 command_runner.go:124] > [Install]
	I0811 00:54:43.248157 1439337 command_runner.go:124] > WantedBy=multi-user.target
	I0811 00:54:43.249339 1439337 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 00:54:43.342921 1439337 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 00:54:43.352798 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:54:43.414498 1439337 command_runner.go:124] > 20.10.7
	I0811 00:54:43.417618 1439337 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 00:54:43.468394 1439337 command_runner.go:124] > 20.10.7
	I0811 00:54:43.475858 1439337 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 00:54:43.477955 1439337 out.go:177]   - env NO_PROXY=192.168.49.2
	I0811 00:54:43.478028 1439337 cli_runner.go:115] Run: docker network inspect multinode-20210811005307-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 00:54:43.509881 1439337 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 00:54:43.513509 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
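The bash one-liner above makes the host.minikube.internal mapping idempotent: any stale line ending in that name is dropped and a single "192.168.49.1<TAB>host.minikube.internal" entry is appended via a temp copy. A rough Go equivalent is sketched below; it is not minikube's implementation, and the temp-file name is made up.

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry removes any existing line for name and appends "ip<TAB>name",
// replacing the file through a temporary copy, mirroring the one-liner above.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry for this name
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}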
	I0811 00:54:43.524526 1439337 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367 for IP: 192.168.49.3
	I0811 00:54:43.524579 1439337 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 00:54:43.524598 1439337 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 00:54:43.524612 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 00:54:43.524625 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 00:54:43.524662 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 00:54:43.524675 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 00:54:43.524727 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 00:54:43.524775 1439337 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 00:54:43.524791 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 00:54:43.524815 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 00:54:43.524844 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 00:54:43.524870 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 00:54:43.524919 1439337 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 00:54:43.524955 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.524971 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.524987 1439337 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem -> /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.525398 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 00:54:43.544023 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 00:54:43.560663 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 00:54:43.577005 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 00:54:43.593430 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 00:54:43.609865 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 00:54:43.626164 1439337 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 00:54:43.642676 1439337 ssh_runner.go:149] Run: openssl version
	I0811 00:54:43.647038 1439337 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0811 00:54:43.647406 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 00:54:43.654398 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.656948 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.657286 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.657333 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 00:54:43.661954 1439337 command_runner.go:124] > b5213941
	I0811 00:54:43.662017 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 00:54:43.668789 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 00:54:43.675944 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.678752 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.678990 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.679040 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 00:54:43.683512 1439337 command_runner.go:124] > 51391683
	I0811 00:54:43.683852 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 00:54:43.691161 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 00:54:43.697979 1439337 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.700576 1439337 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.700866 1439337 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.700918 1439337 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 00:54:43.705434 1439337 command_runner.go:124] > 3ec20f2e
	I0811 00:54:43.705777 1439337 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
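The three certificate installs above follow the same pattern: compute the OpenSSL subject hash of the PEM (b5213941, 51391683 and 3ec20f2e in this run) and symlink the file into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can find it. Below is a sketch in Go that shells out to the same openssl invocation; error handling is trimmed and this is not the code used by the test.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert asks openssl for the subject hash of a PEM certificate and links
// it into /etc/ssl/certs under the <hash>.0 name, mimicking ln -fs.
func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace an existing link, as ln -fs would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}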
	I0811 00:54:43.712454 1439337 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 00:54:43.832028 1439337 command_runner.go:124] > cgroupfs
	I0811 00:54:43.835534 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:54:43.835579 1439337 cni.go:154] 2 nodes found, recommending kindnet
	I0811 00:54:43.835604 1439337 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 00:54:43.835627 1439337 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210811005307-1387367 NodeName:multinode-20210811005307-1387367-m02 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 00:54:43.835770 1439337 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20210811005307-1387367-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 00:54:43.835865 1439337 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20210811005307-1387367-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 00:54:43.835935 1439337 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 00:54:43.841837 1439337 command_runner.go:124] > kubeadm
	I0811 00:54:43.841879 1439337 command_runner.go:124] > kubectl
	I0811 00:54:43.841897 1439337 command_runner.go:124] > kubelet
	I0811 00:54:43.842792 1439337 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 00:54:43.842856 1439337 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0811 00:54:43.850102 1439337 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0811 00:54:43.862601 1439337 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 00:54:43.874704 1439337 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 00:54:43.877535 1439337 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 00:54:43.885806 1439337 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 00:54:43.886293 1439337 start.go:241] JoinCluster: &{Name:multinode-20210811005307-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210811005307-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0811 00:54:43.886377 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0811 00:54:43.886423 1439337 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 00:54:43.919179 1439337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 00:54:44.081445 1439337 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token d5zr0h.q1pm3uca3ghnt70i --discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 
	I0811 00:54:44.086975 1439337 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0811 00:54:44.087014 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token d5zr0h.q1pm3uca3ghnt70i --discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20210811005307-1387367-m02"
	I0811 00:54:44.143277 1439337 command_runner.go:124] > [preflight] Running pre-flight checks
	I0811 00:54:44.451763 1439337 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0811 00:54:44.451784 1439337 command_runner.go:124] > KERNEL_VERSION: 5.8.0-1041-aws
	I0811 00:54:44.451792 1439337 command_runner.go:124] > DOCKER_VERSION: 20.10.7
	I0811 00:54:44.451800 1439337 command_runner.go:124] > DOCKER_GRAPH_DRIVER: overlay2
	I0811 00:54:44.451806 1439337 command_runner.go:124] > OS: Linux
	I0811 00:54:44.451813 1439337 command_runner.go:124] > CGROUPS_CPU: enabled
	I0811 00:54:44.451820 1439337 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0811 00:54:44.451826 1439337 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0811 00:54:44.451833 1439337 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0811 00:54:44.451846 1439337 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0811 00:54:44.451852 1439337 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0811 00:54:44.451859 1439337 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0811 00:54:44.451866 1439337 command_runner.go:124] > CGROUPS_HUGETLB: enabled
	I0811 00:54:44.611961 1439337 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0811 00:54:44.611993 1439337 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0811 00:54:44.644269 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 00:54:44.644549 1439337 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 00:54:44.644570 1439337 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0811 00:54:44.745915 1439337 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0811 00:54:50.310561 1439337 command_runner.go:124] > This node has joined the cluster:
	I0811 00:54:50.310586 1439337 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0811 00:54:50.310595 1439337 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0811 00:54:50.310604 1439337 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0811 00:54:50.313793 1439337 command_runner.go:124] ! 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0811 00:54:50.313824 1439337 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
	I0811 00:54:50.313836 1439337 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 00:54:50.313856 1439337 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token d5zr0h.q1pm3uca3ghnt70i --discovery-token-ca-cert-hash sha256:de7b801124e562bd66867fe5271994d6be7651a35fa31dfce01acdef2a9271b2 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20210811005307-1387367-m02": (6.226830739s)
	I0811 00:54:50.313871 1439337 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0811 00:54:50.408968 1439337 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0811 00:54:50.497995 1439337 start.go:243] JoinCluster complete in 6.611697866s
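	The join above has two halves: "kubeadm token create --print-join-command --ttl=0" runs against the control plane to produce the join command, and the worker then runs that command with --ignore-preflight-errors=all, an explicit --cri-socket, and a --node-name override. A hedged sketch of the same sequence with os/exec; joinWorker is a hypothetical helper, both commands run locally through bash here for illustration, whereas the log runs each half over SSH inside the respective node container:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// joinWorker mirrors the two steps in the log: ask the control plane for a
	// join command, then run it with the extra flags appended for the worker.
	func joinWorker(binDir, nodeName string) error {
		printJoin := fmt.Sprintf("sudo env PATH=%s:$PATH kubeadm token create --print-join-command --ttl=0", binDir)
		out, err := exec.Command("/bin/bash", "-c", printJoin).Output()
		if err != nil {
			return fmt.Errorf("print-join-command: %w", err)
		}
		join := fmt.Sprintf("sudo env PATH=%s:$PATH %s --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=%s",
			binDir, strings.TrimSpace(string(out)), nodeName)
		return exec.Command("/bin/bash", "-c", join).Run()
	}

	func main() {
		if err := joinWorker("/var/lib/minikube/binaries/v1.21.3", "multinode-20210811005307-1387367-m02"); err != nil {
			fmt.Println("join failed:", err)
		}
	}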
	I0811 00:54:50.498018 1439337 cni.go:93] Creating CNI manager for ""
	I0811 00:54:50.498025 1439337 cni.go:154] 2 nodes found, recommending kindnet
	I0811 00:54:50.498081 1439337 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0811 00:54:50.502538 1439337 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0811 00:54:50.502566 1439337 command_runner.go:124] >   Size: 2603192   	Blocks: 5088       IO Block: 4096   regular file
	I0811 00:54:50.502575 1439337 command_runner.go:124] > Device: 3fh/63d	Inode: 2356928     Links: 1
	I0811 00:54:50.502583 1439337 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 00:54:50.502590 1439337 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:54:50.502597 1439337 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0811 00:54:50.502604 1439337 command_runner.go:124] > Change: 2021-07-02 14:49:52.887930340 +0000
	I0811 00:54:50.502608 1439337 command_runner.go:124] >  Birth: -
	I0811 00:54:50.502884 1439337 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0811 00:54:50.502898 1439337 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0811 00:54:50.515400 1439337 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 00:54:50.745633 1439337 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0811 00:54:50.748145 1439337 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0811 00:54:50.750666 1439337 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0811 00:54:50.762393 1439337 command_runner.go:124] > daemonset.apps/kindnet configured
	I0811 00:54:50.770246 1439337 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0811 00:54:50.772936 1439337 out.go:177] * Verifying Kubernetes components...
	I0811 00:54:50.773050 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:54:50.783888 1439337 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:54:50.784194 1439337 kapi.go:59] client config for multinode-20210811005307-1387367: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-20210811005307-1387367/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/multinode-202
10811005307-1387367/client.key", CAFile:"/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1115760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 00:54:50.785499 1439337 node_ready.go:35] waiting up to 6m0s for node "multinode-20210811005307-1387367-m02" to be "Ready" ...
	I0811 00:54:50.785578 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:50.785589 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:50.785595 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:50.785604 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:50.787598 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:54:50.787618 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:50.787623 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:50.787627 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:50.787631 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:50 GMT
	I0811 00:54:50.787634 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:50.787638 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:50.787760 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:51.288736 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:51.288761 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:51.288767 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:51.288772 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:51.291186 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:51.291202 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:51.291207 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:51.291210 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:51 GMT
	I0811 00:54:51.291214 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:51.291218 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:51.291221 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:51.291704 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:51.788205 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:51.788228 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:51.788235 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:51.788240 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:51.804491 1439337 round_trippers.go:457] Response Status: 200 OK in 16 milliseconds
	I0811 00:54:51.804511 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:51.804517 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:51.804521 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:51.804525 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:51.804528 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:51.804532 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:51 GMT
	I0811 00:54:51.805979 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:52.288821 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:52.288853 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:52.288860 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:52.288865 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:52.291107 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:52.291125 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:52.291130 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:52.291134 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:52.291140 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:52 GMT
	I0811 00:54:52.291143 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:52.291146 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:52.291300 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:52.788388 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:52.788416 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:52.788423 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:52.788428 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:52.799716 1439337 round_trippers.go:457] Response Status: 200 OK in 11 milliseconds
	I0811 00:54:52.799742 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:52.799748 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:52.799752 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:52.799756 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:52 GMT
	I0811 00:54:52.799760 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:52.799764 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:52.800216 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:52.800472 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
	I0811 00:54:53.288398 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:53.288420 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:53.288427 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:53.288431 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:53.290655 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:53.290670 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:53.290675 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:53.290679 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:53.290682 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:53.290686 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:53 GMT
	I0811 00:54:53.290690 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:53.290861 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:53.788408 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:53.788429 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:53.788435 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:53.788439 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:53.790943 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:53.790959 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:53.790964 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:53.790968 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:53.790971 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:53.790975 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:53.790978 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:53 GMT
	I0811 00:54:53.791439 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:54.288347 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:54.288372 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:54.288378 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:54.288383 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:54.290534 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:54.290550 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:54.290555 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:54.290559 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:54.290563 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:54.290567 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:54.290570 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:54 GMT
	I0811 00:54:54.290671 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"568","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":
"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata" [truncated 4364 chars]
	I0811 00:54:54.788225 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:54.788246 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:54.788252 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:54.788257 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:54.790654 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:54.790671 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:54.790676 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:54.790680 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:54.790684 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:54.790688 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:54.790692 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:54 GMT
	I0811 00:54:54.790800 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:55.288569 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:55.288592 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:55.288598 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:55.288603 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:55.294389 1439337 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0811 00:54:55.294409 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:55.294414 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:55.294418 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:55.294422 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:55.294426 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:55.294430 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:55 GMT
	I0811 00:54:55.294547 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:55.294808 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
	I0811 00:54:55.788183 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:55.788203 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:55.788209 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:55.788214 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:55.790252 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:55.790269 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:55.790274 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:55 GMT
	I0811 00:54:55.790279 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:55.790282 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:55.790285 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:55.790289 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:55.790400 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:56.288311 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:56.288341 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:56.288347 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:56.288352 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:56.290684 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:56.290701 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:56.290705 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:56.290709 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:56.290713 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:56.290716 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:56.290720 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:56 GMT
	I0811 00:54:56.290857 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:56.788204 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:56.788231 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:56.788237 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:56.788242 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:56.791372 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:56.791395 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:56.791401 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:56.791408 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:56 GMT
	I0811 00:54:56.791413 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:56.791417 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:56.791420 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:56.791547 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:57.288521 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:57.288565 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:57.288572 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:57.288577 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:57.290841 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:57.290859 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:57.290864 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:57.290868 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:57.290872 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:57.290875 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:57.290879 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:57 GMT
	I0811 00:54:57.291017 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:57.788232 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:57.788261 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:57.788267 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:57.788272 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:57.791325 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:54:57.791344 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:57.791351 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:57.791355 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:57.791359 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:57.791363 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:57.791368 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:57 GMT
	I0811 00:54:57.791547 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:57.791821 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
	I0811 00:54:58.288182 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:58.288209 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:58.288218 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:58.288222 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:58.290540 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:58.290564 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:58.290569 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:58.290573 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:58 GMT
	I0811 00:54:58.290577 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:58.290581 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:58.290585 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:58.290771 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:58.788724 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:58.788754 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:58.788760 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:58.788765 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:58.791510 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:58.791534 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:58.791539 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:58.791543 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:58 GMT
	I0811 00:54:58.791546 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:58.791550 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:58.791553 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:58.791686 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:59.288590 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:59.288618 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:59.288625 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:59.288630 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:59.291031 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:59.291053 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:59.291058 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:59.291062 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:59.291065 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:59.291071 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:59.291075 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:59 GMT
	I0811 00:54:59.291248 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:59.789056 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:54:59.789079 1439337 round_trippers.go:438] Request Headers:
	I0811 00:54:59.789085 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:54:59.789090 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:54:59.791574 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:54:59.791592 1439337 round_trippers.go:460] Response Headers:
	I0811 00:54:59.791597 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:54:59.791601 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:54:59.791604 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:54:59.791608 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:54:59.791612 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:54:59 GMT
	I0811 00:54:59.791731 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"582","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/host
name":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephe [truncated 4473 chars]
	I0811 00:54:59.791978 1439337 node_ready.go:58] node "multinode-20210811005307-1387367-m02" has status "Ready":"False"
	I0811 00:55:00.288225 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:55:00.288252 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.288259 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.288264 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.290393 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.290409 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.290414 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.290418 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.290422 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.290426 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.290429 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.290538 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"595","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:m
etadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spe [truncated 4508 chars]
	I0811 00:55:00.290782 1439337 node_ready.go:49] node "multinode-20210811005307-1387367-m02" has status "Ready":"True"
	I0811 00:55:00.290790 1439337 node_ready.go:38] duration metric: took 9.50526433s waiting for node "multinode-20210811005307-1387367-m02" to be "Ready" ...
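	The node_ready loop logged above is a plain poll of /api/v1/nodes/<name> roughly every 500ms until the node's Ready condition flips to True (resourceVersion 595 in the last response). A minimal client-go sketch of the same wait, using the kubeconfig path and node name from the log; waitNodeReady is a hypothetical helper, not minikube's code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node until its Ready condition is True or the
	// timeout expires, mirroring the 500ms GET loop in the log.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
	}

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig"
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "multinode-20210811005307-1387367-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}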
	I0811 00:55:00.290800 1439337 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 00:55:00.290863 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 00:55:00.290868 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.290875 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.290879 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.294373 1439337 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0811 00:55:00.294579 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.294606 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.294633 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.294650 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.294664 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.294692 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.295211 1439337 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 69154 chars]
	I0811 00:55:00.299413 1439337 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.299758 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-lpxc6
	I0811 00:55:00.299893 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.299920 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.299926 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.306045 1439337 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0811 00:55:00.306067 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.306072 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.306076 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.306080 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.306083 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.306087 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.306222 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-lpxc6","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"839d8a5e-9cef-4c9e-a07f-db7f529aaa6a","resourceVersion":"518","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"d2f4d58c-c53b-4251-9422-50d7bb166f11","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2f4d58c-c53b-4251-9422-50d7bb166f11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 6071 chars]
	I0811 00:55:00.306592 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.306610 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.306615 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.306620 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.308138 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.308163 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.308168 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.308173 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.308176 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.308180 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.308196 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.308310 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.308598 1439337 pod_ready.go:92] pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.308619 1439337 pod_ready.go:81] duration metric: took 9.154807ms waiting for pod "coredns-558bd4d5db-lpxc6" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.308643 1439337 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.308711 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210811005307-1387367
	I0811 00:55:00.308722 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.308727 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.308740 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.310463 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.310483 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.310488 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.310492 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.310495 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.310499 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.310520 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.310631 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210811005307-1387367","namespace":"kube-system","uid":"b98555c3-d9ce-452c-a2de-7ee50a50311d","resourceVersion":"459","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.mirror":"70ae736662f600440da0a55cde86b0f8","kubernetes.io/config.seen":"2021-08-11T00:53:47.643869676Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm
.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.h [truncated 5588 chars]
	I0811 00:55:00.310940 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.310956 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.310962 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.310967 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.312442 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.312462 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.312467 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.312471 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.312474 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.312478 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.312509 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.312613 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.312873 1439337 pod_ready.go:92] pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.312888 1439337 pod_ready.go:81] duration metric: took 4.233467ms waiting for pod "etcd-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.312903 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.312950 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210811005307-1387367
	I0811 00:55:00.312961 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.312966 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.312971 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.314738 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.314754 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.314758 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.314762 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.314765 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.314770 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.314775 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.314882 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210811005307-1387367","namespace":"kube-system","uid":"520b1e32-479d-4e0e-8867-276c958ae125","resourceVersion":"460","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.mirror":"74969952953b6d01bc2817560a3e688d","kubernetes.io/config.seen":"2021-08-11T00:53:31.835501949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addr [truncated 8113 chars]
	I0811 00:55:00.315228 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.315239 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.315244 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.315249 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.316853 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.316888 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.316921 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.316940 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.316959 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.316964 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.316968 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.317095 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.317350 1439337 pod_ready.go:92] pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.317365 1439337 pod_ready.go:81] duration metric: took 4.452945ms waiting for pod "kube-apiserver-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.317375 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.317426 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210811005307-1387367
	I0811 00:55:00.317437 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.317441 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.317446 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.319081 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.319109 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.319126 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.319129 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.319133 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.319136 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.319140 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.319261 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210811005307-1387367","namespace":"kube-system","uid":"f0ca8783-2ede-4c80-adc7-94aa58a85ad1","resourceVersion":"462","creationTimestamp":"2021-08-11T00:53:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.mirror":"cfbf57d2192b91a488c5172bd9546eeb","kubernetes.io/config.seen":"2021-08-11T00:53:31.835503352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/c
onfig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/conf [truncated 7679 chars]
	I0811 00:55:00.319596 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:00.319613 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.319618 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.319622 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.321381 1439337 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0811 00:55:00.321401 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.321406 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.321409 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.321413 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.321435 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.321439 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.321750 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:00.322017 1439337 pod_ready.go:92] pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.322034 1439337 pod_ready.go:81] duration metric: took 4.644993ms waiting for pod "kube-controller-manager-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.322045 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29jgc" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.489237 1439337 request.go:600] Waited for 167.13354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29jgc
	I0811 00:55:00.489343 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29jgc
	I0811 00:55:00.489371 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.489384 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.489390 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.491649 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.491686 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.491691 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.491695 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.491699 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.491702 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.491706 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.492004 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29jgc","generateName":"kube-proxy-","namespace":"kube-system","uid":"4cd8a483-2d40-4f4a-817d-8330332fe9bc","resourceVersion":"578","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"37aa45af-7498-4003-abc1-af1fe65a80b1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37aa45af-7498-4003-abc1-af1fe65a80b1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5785 chars]
	I0811 00:55:00.688667 1439337 request.go:600] Waited for 196.270465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:55:00.688753 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367-m02
	I0811 00:55:00.688766 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.688814 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.688827 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.691188 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.691209 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.691214 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.691218 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.691222 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.691225 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.691229 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.691336 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367-m02","uid":"112846e3-2407-4121-9cea-7dab53ea41fd","resourceVersion":"595","creationTimestamp":"2021-08-11T00:54:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:m
etadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spe [truncated 4508 chars]
	I0811 00:55:00.691595 1439337 pod_ready.go:92] pod "kube-proxy-29jgc" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:00.691609 1439337 pod_ready.go:81] duration metric: took 369.557115ms waiting for pod "kube-proxy-29jgc" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.691619 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:00.888988 1439337 request.go:600] Waited for 197.303871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjx8s
	I0811 00:55:00.889087 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sjx8s
	I0811 00:55:00.889138 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:00.889152 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:00.889165 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:00.891617 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:00.891635 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:00.891640 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:00.891643 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:00.891647 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:00 GMT
	I0811 00:55:00.891653 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:00.891657 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:00.891972 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sjx8s","generateName":"kube-proxy-","namespace":"kube-system","uid":"b7a97e6a-09fd-4f56-9ee7-9ebd40c689f7","resourceVersion":"482","creationTimestamp":"2021-08-11T00:54:00Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"37aa45af-7498-4003-abc1-af1fe65a80b1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37aa45af-7498-4003-abc1-af1fe65a80b1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5777 chars]
	I0811 00:55:01.088728 1439337 request.go:600] Waited for 196.322674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.088801 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.088812 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.088821 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.088873 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.091359 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.091413 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.091431 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.091447 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.091464 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.091492 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.091497 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.091604 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:01.091881 1439337 pod_ready.go:92] pod "kube-proxy-sjx8s" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:01.091897 1439337 pod_ready.go:81] duration metric: took 400.265134ms waiting for pod "kube-proxy-sjx8s" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:01.091908 1439337 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:01.288291 1439337 request.go:600] Waited for 196.314683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:55:01.288402 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210811005307-1387367
	I0811 00:55:01.288414 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.288420 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.288426 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.290703 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.290752 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.290769 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.290786 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.290807 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.290831 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.290841 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.290941 1439337 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210811005307-1387367","namespace":"kube-system","uid":"7a24d14d-4566-4ab3-a237-634064615837","resourceVersion":"476","creationTimestamp":"2021-08-11T00:53:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.mirror":"215965f927d1bdc023cfbcf159bba72a","kubernetes.io/config.seen":"2021-08-11T00:53:47.643889688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-11T00:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"
f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f: [truncated 4561 chars]
	I0811 00:55:01.488440 1439337 request.go:600] Waited for 197.166247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.488506 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210811005307-1387367
	I0811 00:55:01.488517 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.488541 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.488546 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.490746 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.490767 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.490773 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.490777 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.490780 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.490784 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.490799 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.491254 1439337 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"m
anager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-0 [truncated 5272 chars]
	I0811 00:55:01.491582 1439337 pod_ready.go:92] pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 00:55:01.491595 1439337 pod_ready.go:81] duration metric: took 399.677986ms waiting for pod "kube-scheduler-multinode-20210811005307-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 00:55:01.491608 1439337 pod_ready.go:38] duration metric: took 1.20079796s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
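	
	[Editor's note] The wait loop logged above polls each system-critical pod and checks its Ready condition before declaring the node usable. A minimal sketch of the same check with client-go follows; this is not minikube's pod_ready.go implementation, and the kubeconfig path is a placeholder assumption.
	
	// sketch_podready.go - rough equivalent of the Ready checks in the log above.
	// Assumes a reachable cluster and a kubeconfig at the hypothetical path below.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady returns true if the pod reports the Ready condition as True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s Ready=%v\n", p.Name, isPodReady(&p))
		}
	}
	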
	I0811 00:55:01.491633 1439337 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 00:55:01.491696 1439337 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 00:55:01.501953 1439337 system_svc.go:56] duration metric: took 10.31352ms WaitForService to wait for kubelet.
	I0811 00:55:01.501981 1439337 kubeadm.go:547] duration metric: took 10.731690233s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 00:55:01.502016 1439337 node_conditions.go:102] verifying NodePressure condition ...
	I0811 00:55:01.688350 1439337 request.go:600] Waited for 186.238183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0811 00:55:01.688416 1439337 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0811 00:55:01.688426 1439337 round_trippers.go:438] Request Headers:
	I0811 00:55:01.688432 1439337 round_trippers.go:442]     Accept: application/json, */*
	I0811 00:55:01.688439 1439337 round_trippers.go:442]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 00:55:01.690977 1439337 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0811 00:55:01.691050 1439337 round_trippers.go:460] Response Headers:
	I0811 00:55:01.691073 1439337 round_trippers.go:463]     Date: Wed, 11 Aug 2021 00:55:01 GMT
	I0811 00:55:01.691089 1439337 round_trippers.go:463]     Cache-Control: no-cache, private
	I0811 00:55:01.691103 1439337 round_trippers.go:463]     Content-Type: application/json
	I0811 00:55:01.691126 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 4301b1e2-e6b4-45c6-ae9c-f53a3f0c066a
	I0811 00:55:01.691150 1439337 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: eded9f2b-66d1-41b4-a367-847e77895237
	I0811 00:55:01.691350 1439337 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"multinode-20210811005307-1387367","uid":"fa35bfa5-c473-4e85-b841-1cb2a4b321c5","resourceVersion":"498","creationTimestamp":"2021-08-11T00:53:44Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-20210811005307-1387367","kubernetes.io/os":"linux","minikube.k8s.io/commit":"877a5691753f15214a0c269ac69dcdc5a4d99fcd","minikube.k8s.io/name":"multinode-20210811005307-1387367","minikube.k8s.io/updated_at":"2021_08_11T00_53_48_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-
managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","o [truncated 10825 chars]
	I0811 00:55:01.691786 1439337 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:55:01.691808 1439337 node_conditions.go:123] node cpu capacity is 2
	I0811 00:55:01.691819 1439337 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 00:55:01.691828 1439337 node_conditions.go:123] node cpu capacity is 2
	I0811 00:55:01.691836 1439337 node_conditions.go:105] duration metric: took 189.808795ms to run NodePressure ...
	I0811 00:55:01.691848 1439337 start.go:231] waiting for startup goroutines ...
	I0811 00:55:01.758927 1439337 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0811 00:55:01.763608 1439337 out.go:177] * Done! kubectl is now configured to use "multinode-20210811005307-1387367" cluster and "default" namespace by default
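	
	[Editor's note] Several requests above were delayed with "Waited for ... due to client-side throttling, not priority and fairness". That message comes from client-go's built-in client-side rate limiter (request.go), not from the API server. A minimal sketch of where those knobs live is below; the QPS/Burst values mirror client-go's defaults and the kubeconfig path is an illustrative assumption, not minikube's configuration.
	
	// sketch_throttle.go - illustrates the client-side rate limiter behind the
	// "Waited for ... due to client-side throttling" lines in the log above.
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		// client-go defaults to roughly QPS=5 and Burst=10 when these are unset;
		// once the burst is spent, further requests wait, which request.go logs.
		cfg.QPS = 5
		cfg.Burst = 10
	
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// A tight loop of calls can exhaust the burst and make the limiter wait.
		for i := 0; i < 20; i++ {
			if _, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{}); err != nil {
				fmt.Println("list nodes:", err)
			}
		}
	}
	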
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-08-11 00:53:09 UTC, end at Wed 2021-08-11 01:05:09 UTC. --
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135649184Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135683817Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135701171Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.135711485Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.145258347Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.151810323Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.151841059Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.151848214Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.152001690Z" level=info msg="Loading containers: start."
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.363418038Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.449682669Z" level=info msg="Loading containers: done."
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.467840513Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.467916632Z" level=info msg="Daemon has completed initialization"
	Aug 11 00:53:20 multinode-20210811005307-1387367 systemd[1]: Started Docker Application Container Engine.
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.517120905Z" level=info msg="API listen on [::]:2376"
	Aug 11 00:53:20 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:53:20.519425777Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 11 00:55:04 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:04.874364100Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:04 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:04.876583138Z" level=error msg="Handler for POST /v1.41/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:19 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:19.394093774Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:19 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:19.396465664Z" level=error msg="Handler for POST /v1.41/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:44 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:44.322643326Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:55:44 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:55:44.325445564Z" level=error msg="Handler for POST /v1.41/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:56:27 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:56:27.444963234Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 00:57:54 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T00:57:54.516587981Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 11 01:00:44 multinode-20210811005307-1387367 dockerd[456]: time="2021-08-11T01:00:44.722322503Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
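	
	[Editor's note] The repeated "toomanyrequests" errors above are Docker Hub's anonymous pull rate limit; authenticating the pulls (docker login, or credentials in the daemon/kubelet) raises the limit. As a rough sketch, the remaining allowance can be read from the RateLimit headers Docker Hub returns on a manifest HEAD request. The token endpoint and the ratelimitpreview/test repository below follow Docker's published procedure and are assumptions of this note, not part of the test harness.
	
	// sketch_ratelimit.go - reads Docker Hub's RateLimit headers to see how many
	// anonymous pulls remain (the limit behind the "toomanyrequests" errors above).
	package main
	
	import (
		"encoding/json"
		"fmt"
		"net/http"
	)
	
	func main() {
		// 1. Fetch an anonymous pull token for the rate-limit preview repository.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}
	
		// 2. HEAD the manifest; the response carries the rate-limit headers.
		req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		head, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer head.Body.Close()
	
		fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
	}
	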
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID
	fb85947729a5e       1a1f05a2cd7c2                                                                              10 minutes ago      Running             coredns                   0                   3c3987c621535
	e7872a86c850c       ba04bb24b9575                                                                              10 minutes ago      Running             storage-provisioner       0                   36e2ef011ff7e
	cfc02224ae9dd       kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c   11 minutes ago      Running             kindnet-cni               0                   c3e92f3f028a3
	e36db61f87b4d       4ea38350a1beb                                                                              11 minutes ago      Running             kube-proxy                0                   14679d4e451ff
	1c09dc0ad10ec       cb310ff289d79                                                                              11 minutes ago      Running             kube-controller-manager   0                   4e6e27aeb111d
	9e13d13bead3b       31a3b96cefc1e                                                                              11 minutes ago      Running             kube-scheduler            0                   4ec36aab9e5f6
	bfe8629569ccb       44a6d50ef170d                                                                              11 minutes ago      Running             kube-apiserver            0                   c7ed64fb2f162
	678b20fb70dc2       05b738aa1bc63                                                                              11 minutes ago      Running             etcd                      0                   2d1df52c9254e
	
	* 
	* ==> coredns [fb85947729a5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210811005307-1387367
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-20210811005307-1387367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=multinode-20210811005307-1387367
	                    minikube.k8s.io/updated_at=2021_08_11T00_53_48_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Aug 2021 00:53:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210811005307-1387367
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Aug 2021 01:05:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:53:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:53:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:53:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Aug 2021 01:04:25 +0000   Wed, 11 Aug 2021 00:54:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20210811005307-1387367
	Capacity:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                a9502501-581c-4295-8dea-fb7a922e5304
	  Boot ID:                    dff2c102-a0cf-4fb0-a2ea-36617f3a3229
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-2jxsd                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-558bd4d5db-lpxc6                                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-multinode-20210811005307-1387367                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-xqj59                                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-multinode-20210811005307-1387367             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-20210811005307-1387367    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-sjx8s                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-20210811005307-1387367             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  11m (x4 over 11m)  kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x4 over 11m)  kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x3 over 11m)  kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                10m                kubelet     Node multinode-20210811005307-1387367 status is now: NodeReady
	
	
	Name:               multinode-20210811005307-1387367-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-20210811005307-1387367-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Aug 2021 00:54:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210811005307-1387367-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Aug 2021 01:05:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Aug 2021 01:00:21 +0000   Wed, 11 Aug 2021 00:54:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20210811005307-1387367-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                62b75c9b-274b-4e0a-a6bc-ecf3fccdcede
	  Boot ID:                    dff2c102-a0cf-4fb0-a2ea-36617f3a3229
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-c9mqs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-bsbng               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-29jgc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                10m                kubelet     Node multinode-20210811005307-1387367-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001093] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000822] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000cf2b9e77
	[  +0.001079] FS-Cache: N-key=[8] '38a8010000000000'
	[  +0.008061] FS-Cache: Duplicate cookie detected
	[  +0.000824] FS-Cache: O-cookie c=000000009e8af87d [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000d0f41ca1 n=00000000882d24dd
	[  +0.001078] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000828] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001344] FS-Cache: N-cookie d=00000000d0f41ca1 n=000000006ce4882d
	[  +0.001069] FS-Cache: N-key=[8] '38a8010000000000'
	[  +1.509820] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=00000000e1eedaf3 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001318] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000025fbee24
	[  +0.001053] FS-Cache: O-key=[8] '37a8010000000000'
	[  +0.000829] FS-Cache: N-cookie c=000000006f83a19d [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001316] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000d322ea0c
	[  +0.001048] FS-Cache: N-key=[8] '37a8010000000000'
	[  +0.277640] FS-Cache: Duplicate cookie detected
	[  +0.000818] FS-Cache: O-cookie c=000000007ae3c387 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001327] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000004bd4688e
	[  +0.001069] FS-Cache: O-key=[8] '3ca8010000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000000007642642 [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001309] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000ae88504f
	[  +0.001071] FS-Cache: N-key=[8] '3ca8010000000000'
	
	* 
	* ==> etcd [678b20fb70dc] <==
	* 2021-08-11 01:01:27.333649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:01:37.333163 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:01:47.333074 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:01:57.333558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:07.333606 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:17.333458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:27.333453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:37.333319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:47.332909 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:02:57.333242 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:07.333610 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:17.332988 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:27.333414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:37.333633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:38.190399 I | mvcc: store.index: compact 860
	2021-08-11 01:03:38.191626 I | mvcc: finished scheduled compaction at 860 (took 900.376µs)
	2021-08-11 01:03:47.333391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:03:57.333235 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:07.333247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:17.333255 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:27.333756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:37.333644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:47.333119 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:04:57.333047 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:05:07.333762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  01:05:09 up 10:47,  0 users,  load average: 0.22, 0.50, 1.07
	Linux multinode-20210811005307-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [bfe8629569cc] <==
	* I0811 00:59:49.371119       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:00:24.886906       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:00:24.886972       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:00:24.886983       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:01:01.929336       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:01:01.929387       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:01:01.929396       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:01:34.564430       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:01:34.564478       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:01:34.564488       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:02:17.163381       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:02:17.163600       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:02:17.163680       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:03:01.989553       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:03:01.989606       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:03:01.989616       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:03:37.681059       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:03:37.681107       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:03:37.681116       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:04:22.572725       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:04:22.572935       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:04:22.572957       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0811 01:05:04.824051       1 client.go:360] parsed scheme: "passthrough"
	I0811 01:05:04.824098       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0811 01:05:04.824353       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [1c09dc0ad10e] <==
	* I0811 00:53:59.902929       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0811 00:53:59.911433       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0811 00:54:00.234119       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0811 00:54:00.349567       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0811 00:54:00.413791       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0811 00:54:00.417240       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0811 00:54:00.417308       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0811 00:54:00.633507       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sjx8s"
	I0811 00:54:00.633542       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xqj59"
	I0811 00:54:00.684580       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-ckrfd"
	I0811 00:54:00.696172       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-lpxc6"
	I0811 00:54:00.741742       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-ckrfd"
	I0811 00:54:24.605204       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0811 00:54:49.643184       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210811005307-1387367-m02" does not exist
	I0811 00:54:49.670595       1 range_allocator.go:373] Set node multinode-20210811005307-1387367-m02 PodCIDR to [10.244.1.0/24]
	I0811 00:54:49.689655       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bsbng"
	I0811 00:54:49.691241       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-29jgc"
	E0811 00:54:49.730169       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"77b04802-67f5-4c63-bfa7-7aafef47aa03", ResourceVersion:"491", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764240028, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists
\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40021174d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40021174e8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4002117500), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4002117518)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40014bc940), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, Crea
tionTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002117530), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.Flex
VolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002117548), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVo
lumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CS
IVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002117560), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*
v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014bc960)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014bc9a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amou
nt{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropa
gation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4002139140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400214a708), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000174690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(ni
l), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400214d5a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400214a750)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetConditio
n(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E0811 00:54:49.754326       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"37aa45af-7498-4003-abc1-af1fe65a80b1", ResourceVersion:"483", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764240027, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4002007ba8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4002007bc0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4002007bd8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4002007bf0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40013b8460), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40020cbc40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002007c08), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002007c20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40013b84a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4002188960), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40020db9f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000176460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40021a0040)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020dba48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	W0811 00:54:54.608706       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210811005307-1387367-m02. Assuming now as a timestamp.
	I0811 00:54:54.609043       1 event.go:291] "Event occurred" object="multinode-20210811005307-1387367-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210811005307-1387367-m02 event: Registered Node multinode-20210811005307-1387367-m02 in Controller"
	I0811 00:55:02.961045       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0811 00:55:02.978990       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-c9mqs"
	I0811 00:55:03.005311       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-2jxsd"
	I0811 00:55:04.623493       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-c9mqs" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-c9mqs"
	
	* 
	* ==> kube-proxy [e36db61f87b4] <==
	* I0811 00:54:03.259878       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0811 00:54:03.259951       1 server_others.go:140] Detected node IP 192.168.49.2
	W0811 00:54:03.259984       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0811 00:54:03.291493       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0811 00:54:03.292302       1 server_others.go:212] Using iptables Proxier.
	I0811 00:54:03.292332       1 server_others.go:219] creating dualStackProxier for iptables.
	W0811 00:54:03.292344       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0811 00:54:03.292836       1 server.go:643] Version: v1.21.3
	I0811 00:54:03.294255       1 config.go:315] Starting service config controller
	I0811 00:54:03.294285       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0811 00:54:03.294364       1 config.go:224] Starting endpoint slice config controller
	I0811 00:54:03.294378       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0811 00:54:03.307553       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0811 00:54:03.310029       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0811 00:54:03.394503       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0811 00:54:03.394677       1 shared_informer.go:247] Caches are synced for service config 
	W0811 01:00:34.311069       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [9e13d13bead3] <==
	* W0811 00:53:44.289970       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0811 00:53:44.290047       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0811 00:53:44.290103       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0811 00:53:44.370326       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0811 00:53:44.370404       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 00:53:44.370410       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 00:53:44.370422       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0811 00:53:44.383954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 00:53:44.384202       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:53:44.384370       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.384546       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.384722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 00:53:44.384882       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 00:53:44.385053       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.385235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 00:53:44.390020       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 00:53:44.390179       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 00:53:44.390349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 00:53:44.390461       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 00:53:44.390510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 00:53:44.405168       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 00:53:45.204983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 00:53:45.283915       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 00:53:45.422581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0811 00:53:45.970994       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-08-11 00:53:09 UTC, end at Wed 2021-08-11 01:05:09 UTC. --
	Aug 11 01:00:27 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:27.043766    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.725721    2325 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:1.28"
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.725768    2325 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:1.28"
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.725871    2325 kuberuntime_manager.go:864] container &Container{Name:busybox,Image:busybox:1.28,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5zfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod busybox-84b6686758-2jxsd_default(38a2c1c7-063c-4e65-9056-8e76fd707dd5): ErrImagePull: rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increas
e the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 11 01:00:44 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:44.726221    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:00:59 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:00:59.044632    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:13 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:13.044560    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:24 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:24.044964    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:36 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:36.044645    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:01:50 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:01:50.044666    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:04 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:04.044171    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:19 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:19.044201    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:30 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:30.044527    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:41 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:41.044532    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:02:52 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:02:52.044668    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:04 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:04.044598    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:17 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:17.045070    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:28 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:28.052305    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:43 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:43.044745    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:03:56 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:03:56.044691    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:09 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:09.044891    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:24 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:24.044752    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:38 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:38.044534    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:04:50 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:04:50.044483    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	Aug 11 01:05:01 multinode-20210811005307-1387367 kubelet[2325]: E0811 01:05:01.044558    2325 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:1.28\\\"\"" pod="default/busybox-84b6686758-2jxsd" podUID=38a2c1c7-063c-4e65-9056-8e76fd707dd5
	
	* 
	* ==> storage-provisioner [e7872a86c850] <==
	* I0811 00:54:27.010627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0811 00:54:27.028649       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0811 00:54:27.028695       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0811 00:54:27.056687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0811 00:54:27.056856       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210811005307-1387367_ef6970d5-57b3-408b-8903-f5d4b1b25dac!
	I0811 00:54:27.057780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dfb4bebd-3736-4b46-8595-d59a34df22f5", APIVersion:"v1", ResourceVersion:"510", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210811005307-1387367_ef6970d5-57b3-408b-8903-f5d4b1b25dac became leader
	I0811 00:54:27.157477       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210811005307-1387367_ef6970d5-57b3-408b-8903-f5d4b1b25dac!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-20210811005307-1387367 -n multinode-20210811005307-1387367
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210811005307-1387367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: busybox-84b6686758-2jxsd busybox-84b6686758-c9mqs
helpers_test.go:270: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context multinode-20210811005307-1387367 describe pod busybox-84b6686758-2jxsd busybox-84b6686758-c9mqs
helpers_test.go:278: (dbg) kubectl --context multinode-20210811005307-1387367 describe pod busybox-84b6686758-2jxsd busybox-84b6686758-c9mqs:

                                                
                                                
-- stdout --
	Name:         busybox-84b6686758-2jxsd
	Namespace:    default
	Priority:     0
	Node:         multinode-20210811005307-1387367/192.168.49.2
	Start Time:   Wed, 11 Aug 2021 00:55:03 +0000
	Labels:       app=busybox
	              pod-template-hash=84b6686758
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.3
	IPs:
	  IP:           10.244.0.3
	Controlled By:  ReplicaSet/busybox-84b6686758
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m5zfg (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-m5zfg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/busybox-84b6686758-2jxsd to multinode-20210811005307-1387367
	  Warning  Failed     9m26s (x3 over 10m)   kubelet            Failed to pull image "busybox:1.28": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m43s (x4 over 10m)   kubelet            Pulling image "busybox:1.28"
	  Warning  Failed     8m43s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     8m43s                 kubelet            Failed to pull image "busybox:1.28": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m16s (x6 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "busybox:1.28"
	
	
	Name:         busybox-84b6686758-c9mqs
	Namespace:    default
	Priority:     0
	Node:         multinode-20210811005307-1387367-m02/192.168.49.3
	Start Time:   Wed, 11 Aug 2021 00:55:02 +0000
	Labels:       app=busybox
	              pod-template-hash=84b6686758
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.1.2
	IPs:
	  IP:           10.244.1.2
	Controlled By:  ReplicaSet/busybox-84b6686758
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:1.28
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fqvdr (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-fqvdr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/busybox-84b6686758-c9mqs to multinode-20210811005307-1387367-m02
	  Normal   Pulling    8m36s (x4 over 10m)   kubelet            Pulling image "busybox:1.28"
	  Warning  Failed     8m36s (x4 over 10m)   kubelet            Failed to pull image "busybox:1.28": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     8m36s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     8m23s (x6 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "busybox:1.28"

                                                
                                                
-- /stdout --
helpers_test.go:281: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:282: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.13s)
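Note: the two busybox pods above never left Pending because every pull of busybox:1.28 was rejected by Docker Hub with toomanyrequests (the anonymous pull rate limit), so the failure is an external registry limit rather than a cluster fault. A minimal mitigation sketch, assuming the profile name from this run, a minikube new enough for "image load" (the older "cache add" is the equivalent), and Docker Hub credentials; the <user>/<password> placeholders are illustrative:

	# Pull once on the host and side-load the image so the nodes never hit Docker Hub
	docker pull busybox:1.28
	out/minikube-linux-arm64 -p multinode-20210811005307-1387367 image load busybox:1.28

	# Or authenticate the pulls to raise the rate limit
	kubectl --context multinode-20210811005307-1387367 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<password>
	kubectl --context multinode-20210811005307-1387367 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'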

                                                
                                    
TestScheduledStopUnix (75.75s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-20210811011244-1387367 --memory=2048 --driver=docker  --container-runtime=docker
E0811 01:13:04.809172 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-20210811011244-1387367 --memory=2048 --driver=docker  --container-runtime=docker: (43.376293581s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210811011244-1387367 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-20210811011244-1387367 -n scheduled-stop-20210811011244-1387367
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210811011244-1387367 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210811011244-1387367 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210811011244-1387367 -n scheduled-stop-20210811011244-1387367
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210811011244-1387367
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210811011244-1387367 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210811011244-1387367
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-20210811011244-1387367: exit status 3 (1.902856648s)

                                                
                                                
-- stdout --
	scheduled-stop-20210811011244-1387367
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 01:13:58.021940 1513135 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0811 01:13:58.021986 1513135 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

                                                
                                                
-- stdout --
	scheduled-stop-20210811011244-1387367
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 01:13:58.021940 1513135 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	E0811 01:13:58.021986 1513135 status.go:258] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port

                                                
                                                
** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-11 01:13:58.025145433 +0000 UTC m=+2662.813380797
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210811011244-1387367
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210811011244-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "98dc2e9733df1c465a850f1f267a36e899378b1ceec5856e910c187294b98138",
	        "Created": "2021-08-11T01:12:45.690269694Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2021-08-11T01:12:46.14515411Z",
	            "FinishedAt": "2021-08-11T01:13:56.788701583Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/98dc2e9733df1c465a850f1f267a36e899378b1ceec5856e910c187294b98138/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/98dc2e9733df1c465a850f1f267a36e899378b1ceec5856e910c187294b98138/hostname",
	        "HostsPath": "/var/lib/docker/containers/98dc2e9733df1c465a850f1f267a36e899378b1ceec5856e910c187294b98138/hosts",
	        "LogPath": "/var/lib/docker/containers/98dc2e9733df1c465a850f1f267a36e899378b1ceec5856e910c187294b98138/98dc2e9733df1c465a850f1f267a36e899378b1ceec5856e910c187294b98138-json.log",
	        "Name": "/scheduled-stop-20210811011244-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210811011244-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210811011244-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9c99223f641c3561494ce3459b6792262f10ade30b9f6767301f7314906666d8-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c99223f641c3561494ce3459b6792262f10ade30b9f6767301f7314906666d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c99223f641c3561494ce3459b6792262f10ade30b9f6767301f7314906666d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c99223f641c3561494ce3459b6792262f10ade30b9f6767301f7314906666d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210811011244-1387367",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210811011244-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210811011244-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210811011244-1387367",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210811011244-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b8d04d0357309282db0fdb7052ef184aab329206cf542982d87c6a9c2573a78d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/b8d04d035730",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210811011244-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "98dc2e9733df",
	                        "scheduled-stop-20210811011244-1387367"
	                    ],
	                    "NetworkID": "df1129035d6e5f4533bff099ec6bc05c640d03b21293da3be2c8084588e7fdec",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210811011244-1387367 -n scheduled-stop-20210811011244-1387367
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210811011244-1387367 -n scheduled-stop-20210811011244-1387367: exit status 7 (97.280915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "scheduled-stop-20210811011244-1387367" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "scheduled-stop-20210811011244-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-20210811011244-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-20210811011244-1387367: (1.910407418s)
--- FAIL: TestScheduledStopUnix (75.75s)
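Note: the exit status 3 above appears to be a timing issue: the stop armed with --schedule 5s had already taken the container down (docker inspect shows State "exited", ExitCode 130) by the time the status check ran, so status could not open an SSH session and reported host "Error"; the follow-up check then sees the expected "Stopped" (exit status 7). The sequence under test can be replayed by hand; a rough sketch using only the profile name and flags that appear in this run (the sleep is illustrative):

	out/minikube-linux-arm64 stop -p scheduled-stop-20210811011244-1387367 --schedule 5m        # arm a stop five minutes out
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-20210811011244-1387367
	out/minikube-linux-arm64 stop -p scheduled-stop-20210811011244-1387367 --cancel-scheduled   # disarm it
	out/minikube-linux-arm64 stop -p scheduled-stop-20210811011244-1387367 --schedule 5s        # arm a stop that fires almost immediately
	sleep 10
	out/minikube-linux-arm64 status -p scheduled-stop-20210811011244-1387367                    # expect host "Stopped" / exit status 7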

                                                
                                    
TestSkaffold (67.14s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /tmp/skaffold.exe915549437 version
skaffold_test.go:61: skaffold version: v1.29.0
skaffold_test.go:64: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-20210811011400-1387367 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:64: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-20210811011400-1387367 --memory=2600 --driver=docker  --container-runtime=docker: (43.236343489s)
skaffold_test.go:84: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:108: (dbg) Run:  /tmp/skaffold.exe915549437 run --minikube-profile skaffold-20210811011400-1387367 --kube-context skaffold-20210811011400-1387367 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Non-zero exit: /tmp/skaffold.exe915549437 run --minikube-profile skaffold-20210811011400-1387367 --kube-context skaffold-20210811011400-1387367 --status-check=true --port-forward=false --interactive=false: exit status 1 (17.416383788s)

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	Starting build...
	Found [skaffold-20210811011400-1387367] context, using local docker daemon.
	Building [leeroy-app]...
	Sending build context to Docker daemon  3.072kB
	Step 1/7 : FROM golang:1.12.9-alpine3.10 as builder
	1.12.9-alpine3.10: Pulling from library/golang
	29bddadc8f3f: Pulling fs layer
	02bb20f2603b: Pulling fs layer
	b62863a3550b: Pulling fs layer
	112e0004bc16: Pulling fs layer
	308213b371bf: Pulling fs layer
	112e0004bc16: Waiting
	308213b371bf: Waiting
	b62863a3550b: Verifying Checksum
	b62863a3550b: Download complete
	02bb20f2603b: Verifying Checksum
	02bb20f2603b: Download complete
	29bddadc8f3f: Verifying Checksum
	29bddadc8f3f: Download complete
	308213b371bf: Verifying Checksum
	308213b371bf: Download complete
	29bddadc8f3f: Pull complete
	02bb20f2603b: Pull complete
	b62863a3550b: Pull complete
	112e0004bc16: Verifying Checksum
	112e0004bc16: Download complete
	112e0004bc16: Pull complete
	308213b371bf: Pull complete
	Digest: sha256:e0660b4f1e68e0d408420acb874b396fc6dd25e7c1d03ad36e7d6d1155a4dff6
	Status: Downloaded newer image for golang:1.12.9-alpine3.10
	 ---> ceb634c16195
	Step 2/7 : COPY app.go .
	 ---> bcaaf4bb2835
	Step 3/7 : RUN go build -o /app .
	 ---> Running in 900282536ec0
	# _/go
	/usr/local/go/pkg/tool/linux_arm64/link: running gcc failed: exec: "gcc": executable file not found in $PATH
	
	Building [leeroy-web]...

                                                
                                                
-- /stdout --
** stderr ** 
	unable to stream build output: The command '/bin/sh -c go build -o /app .' returned a non-zero code: 2. Please fix the Dockerfile and try again..

                                                
                                                
** /stderr **
skaffold_test.go:110: error running skaffold: exit status 1

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	Starting build...
	Found [skaffold-20210811011400-1387367] context, using local docker daemon.
	Building [leeroy-app]...
	Sending build context to Docker daemon  3.072kB
	Step 1/7 : FROM golang:1.12.9-alpine3.10 as builder
	1.12.9-alpine3.10: Pulling from library/golang
	29bddadc8f3f: Pulling fs layer
	02bb20f2603b: Pulling fs layer
	b62863a3550b: Pulling fs layer
	112e0004bc16: Pulling fs layer
	308213b371bf: Pulling fs layer
	112e0004bc16: Waiting
	308213b371bf: Waiting
	b62863a3550b: Verifying Checksum
	b62863a3550b: Download complete
	02bb20f2603b: Verifying Checksum
	02bb20f2603b: Download complete
	29bddadc8f3f: Verifying Checksum
	29bddadc8f3f: Download complete
	308213b371bf: Verifying Checksum
	308213b371bf: Download complete
	29bddadc8f3f: Pull complete
	02bb20f2603b: Pull complete
	b62863a3550b: Pull complete
	112e0004bc16: Verifying Checksum
	112e0004bc16: Download complete
	112e0004bc16: Pull complete
	308213b371bf: Pull complete
	Digest: sha256:e0660b4f1e68e0d408420acb874b396fc6dd25e7c1d03ad36e7d6d1155a4dff6
	Status: Downloaded newer image for golang:1.12.9-alpine3.10
	 ---> ceb634c16195
	Step 2/7 : COPY app.go .
	 ---> bcaaf4bb2835
	Step 3/7 : RUN go build -o /app .
	 ---> Running in 900282536ec0
	# _/go
	/usr/local/go/pkg/tool/linux_arm64/link: running gcc failed: exec: "gcc": executable file not found in $PATH
	
	Building [leeroy-web]...

                                                
                                                
-- /stdout --
** stderr ** 
	unable to stream build output: The command '/bin/sh -c go build -o /app .' returned a non-zero code: 2. Please fix the Dockerfile and try again..

                                                
                                                
** /stderr **
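Note: the build dies at the link step because golang:1.12.9-alpine3.10 ships no C toolchain, and with cgo enabled the linux/arm64 link step shells out to gcc. This is a Dockerfile/base-image issue rather than a minikube or skaffold one. A sketch of the two usual fixes, reproduced outside skaffold with plain docker commands (the bind mount and paths are illustrative, not taken from the test's Dockerfile):

	# Fix 1: disable cgo so the pure-Go linker is used and gcc is never invoked
	docker run --rm -v "$PWD":/src -w /src -e CGO_ENABLED=0 golang:1.12.9-alpine3.10 go build -o /app .

	# Fix 2: keep cgo and install the toolchain in the builder stage first
	docker run --rm -v "$PWD":/src -w /src golang:1.12.9-alpine3.10 \
	  sh -c 'apk add --no-cache gcc musl-dev && go build -o /app .'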
panic.go:613: *** TestSkaffold FAILED at 2021-08-11 01:15:02.110139414 +0000 UTC m=+2726.898374779
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect skaffold-20210811011400-1387367
helpers_test.go:236: (dbg) docker inspect skaffold-20210811011400-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "19d75f08100159ef11514cff66bd83e52093dc344e0e1f2c1763148734928412",
	        "Created": "2021-08-11T01:14:02.778192134Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1513794,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-11T01:14:03.30597831Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/19d75f08100159ef11514cff66bd83e52093dc344e0e1f2c1763148734928412/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19d75f08100159ef11514cff66bd83e52093dc344e0e1f2c1763148734928412/hostname",
	        "HostsPath": "/var/lib/docker/containers/19d75f08100159ef11514cff66bd83e52093dc344e0e1f2c1763148734928412/hosts",
	        "LogPath": "/var/lib/docker/containers/19d75f08100159ef11514cff66bd83e52093dc344e0e1f2c1763148734928412/19d75f08100159ef11514cff66bd83e52093dc344e0e1f2c1763148734928412-json.log",
	        "Name": "/skaffold-20210811011400-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-20210811011400-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-20210811011400-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e7240d18d00d696fd82a111f99e305615770f8dcea2fdb3120f34e860f6a9889-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7240d18d00d696fd82a111f99e305615770f8dcea2fdb3120f34e860f6a9889/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7240d18d00d696fd82a111f99e305615770f8dcea2fdb3120f34e860f6a9889/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7240d18d00d696fd82a111f99e305615770f8dcea2fdb3120f34e860f6a9889/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-20210811011400-1387367",
	                "Source": "/var/lib/docker/volumes/skaffold-20210811011400-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-20210811011400-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-20210811011400-1387367",
	                "name.minikube.sigs.k8s.io": "skaffold-20210811011400-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7228e7d16c3fd1bba84fdfed0d0f620bf81e4f9f9a5eca6e4c8c3282e99db062",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50340"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50339"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50336"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50338"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50337"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7228e7d16c3f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-20210811011400-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "19d75f081001",
	                        "skaffold-20210811011400-1387367"
	                    ],
	                    "NetworkID": "ae5fa68f177de82bde58810de83c098dbe7076231b8daee70b3e264388fa67b4",
	                    "EndpointID": "f14696d12c1bd737c18ff56a450ff6458e97847de016e923dc0fa3eccfd57b73",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p skaffold-20210811011400-1387367 -n skaffold-20210811011400-1387367
helpers_test.go:245: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p skaffold-20210811011400-1387367 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p skaffold-20210811011400-1387367 logs -n 25: (1.252319101s)
helpers_test.go:253: TestSkaffold logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|---------------------------------------------------------------|---------------------------------------|----------|---------|-------------------------------|-------------------------------|
	|  Command   |                             Args                              |                Profile                |   User   | Version |          Start Time           |           End Time            |
	|------------|---------------------------------------------------------------|---------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| node       | add -p                                                        | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:10 UTC | Wed, 11 Aug 2021 01:05:53 UTC |
	|            | multinode-20210811005307-1387367                              |                                       |          |         |                               |                               |
	|            | -v 3 --alsologtostderr                                        |                                       |          |         |                               |                               |
	| profile    | list --output json                                            | minikube                              | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:54 UTC | Wed, 11 Aug 2021 01:05:54 UTC |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:55 UTC | Wed, 11 Aug 2021 01:05:55 UTC |
	|            | cp testdata/cp-test.txt                                       |                                       |          |         |                               |                               |
	|            | /home/docker/cp-test.txt                                      |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:55 UTC | Wed, 11 Aug 2021 01:05:56 UTC |
	|            | ssh sudo cat                                                  |                                       |          |         |                               |                               |
	|            | /home/docker/cp-test.txt                                      |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367 cp testdata/cp-test.txt      | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:56 UTC | Wed, 11 Aug 2021 01:05:56 UTC |
	|            | multinode-20210811005307-1387367-m02:/home/docker/cp-test.txt |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:56 UTC | Wed, 11 Aug 2021 01:05:56 UTC |
	|            | ssh -n                                                        |                                       |          |         |                               |                               |
	|            | multinode-20210811005307-1387367-m02                          |                                       |          |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                             |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367 cp testdata/cp-test.txt      | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:56 UTC | Wed, 11 Aug 2021 01:05:57 UTC |
	|            | multinode-20210811005307-1387367-m03:/home/docker/cp-test.txt |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:57 UTC | Wed, 11 Aug 2021 01:05:57 UTC |
	|            | ssh -n                                                        |                                       |          |         |                               |                               |
	|            | multinode-20210811005307-1387367-m03                          |                                       |          |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                             |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:57 UTC | Wed, 11 Aug 2021 01:05:58 UTC |
	|            | node stop m03                                                 |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:05:59 UTC | Wed, 11 Aug 2021 01:06:24 UTC |
	|            | node start m03 --alsologtostderr                              |                                       |          |         |                               |                               |
	| stop       | -p                                                            | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:06:25 UTC | Wed, 11 Aug 2021 01:06:38 UTC |
	|            | multinode-20210811005307-1387367                              |                                       |          |         |                               |                               |
	| start      | -p                                                            | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:06:38 UTC | Wed, 11 Aug 2021 01:08:15 UTC |
	|            | multinode-20210811005307-1387367                              |                                       |          |         |                               |                               |
	|            | --wait=true -v=8                                              |                                       |          |         |                               |                               |
	|            | --alsologtostderr                                             |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:08:15 UTC | Wed, 11 Aug 2021 01:08:20 UTC |
	|            | node delete m03                                               |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:08:20 UTC | Wed, 11 Aug 2021 01:08:32 UTC |
	|            | stop                                                          |                                       |          |         |                               |                               |
	| start      | -p                                                            | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:08:33 UTC | Wed, 11 Aug 2021 01:10:02 UTC |
	|            | multinode-20210811005307-1387367                              |                                       |          |         |                               |                               |
	|            | --wait=true -v=8                                              |                                       |          |         |                               |                               |
	|            | --alsologtostderr                                             |                                       |          |         |                               |                               |
	|            | --driver=docker                                               |                                       |          |         |                               |                               |
	|            | --container-runtime=docker                                    |                                       |          |         |                               |                               |
	| start      | -p                                                            | multinode-20210811005307-1387367-m03  | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:10:03 UTC | Wed, 11 Aug 2021 01:10:48 UTC |
	|            | multinode-20210811005307-1387367-m03                          |                                       |          |         |                               |                               |
	|            | --driver=docker                                               |                                       |          |         |                               |                               |
	|            | --container-runtime=docker                                    |                                       |          |         |                               |                               |
	| delete     | -p                                                            | multinode-20210811005307-1387367-m03  | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:10:48 UTC | Wed, 11 Aug 2021 01:10:51 UTC |
	|            | multinode-20210811005307-1387367-m03                          |                                       |          |         |                               |                               |
	| -p         | multinode-20210811005307-1387367                              | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:10:51 UTC | Wed, 11 Aug 2021 01:10:53 UTC |
	|            | logs -n 25                                                    |                                       |          |         |                               |                               |
	| delete     | -p                                                            | multinode-20210811005307-1387367      | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:10:54 UTC | Wed, 11 Aug 2021 01:10:58 UTC |
	|            | multinode-20210811005307-1387367                              |                                       |          |         |                               |                               |
	| start      | -p                                                            | scheduled-stop-20210811011244-1387367 | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:12:44 UTC | Wed, 11 Aug 2021 01:13:27 UTC |
	|            | scheduled-stop-20210811011244-1387367                         |                                       |          |         |                               |                               |
	|            | --memory=2048 --driver=docker                                 |                                       |          |         |                               |                               |
	|            | --container-runtime=docker                                    |                                       |          |         |                               |                               |
	| stop       | -p                                                            | scheduled-stop-20210811011244-1387367 | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:13:28 UTC | Wed, 11 Aug 2021 01:13:28 UTC |
	|            | scheduled-stop-20210811011244-1387367                         |                                       |          |         |                               |                               |
	|            | --cancel-scheduled                                            |                                       |          |         |                               |                               |
	| stop       | -p                                                            | scheduled-stop-20210811011244-1387367 | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:13:41 UTC | Wed, 11 Aug 2021 01:13:57 UTC |
	|            | scheduled-stop-20210811011244-1387367                         |                                       |          |         |                               |                               |
	|            | --schedule 5s                                                 |                                       |          |         |                               |                               |
	| delete     | -p                                                            | scheduled-stop-20210811011244-1387367 | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:13:58 UTC | Wed, 11 Aug 2021 01:14:00 UTC |
	|            | scheduled-stop-20210811011244-1387367                         |                                       |          |         |                               |                               |
	| start      | -p                                                            | skaffold-20210811011400-1387367       | jenkins  | v1.22.0 | Wed, 11 Aug 2021 01:14:01 UTC | Wed, 11 Aug 2021 01:14:44 UTC |
	|            | skaffold-20210811011400-1387367                               |                                       |          |         |                               |                               |
	|            | --memory=2600 --driver=docker                                 |                                       |          |         |                               |                               |
	|            | --container-runtime=docker                                    |                                       |          |         |                               |                               |
	| docker-env | --shell none -p                                               | skaffold-20210811011400-1387367       | skaffold | v1.22.0 | Wed, 11 Aug 2021 01:14:44 UTC | Wed, 11 Aug 2021 01:14:45 UTC |
	|            | skaffold-20210811011400-1387367                               |                                       |          |         |                               |                               |
	|            | --user=skaffold                                               |                                       |          |         |                               |                               |
	|------------|---------------------------------------------------------------|---------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 01:14:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 01:14:01.330021 1513380 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:14:01.330159 1513380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:14:01.330162 1513380 out.go:311] Setting ErrFile to fd 2...
	I0811 01:14:01.330166 1513380 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:14:01.330296 1513380 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:14:01.330569 1513380 out.go:305] Setting JSON to false
	I0811 01:14:01.331405 1513380 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":39388,"bootTime":1628605053,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 01:14:01.331484 1513380 start.go:121] virtualization:  
	I0811 01:14:01.335211 1513380 out.go:177] * [skaffold-20210811011400-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 01:14:01.337834 1513380 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 01:14:01.336617 1513380 notify.go:169] Checking for updates...
	I0811 01:14:01.340351 1513380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:14:01.342585 1513380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 01:14:01.345178 1513380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 01:14:01.345408 1513380 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 01:14:01.393193 1513380 docker.go:132] docker version: linux-20.10.8
	I0811 01:14:01.393281 1513380 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:14:01.493567 1513380 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 01:14:01.430401161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:14:01.493680 1513380 docker.go:244] overlay module found
	I0811 01:14:01.496136 1513380 out.go:177] * Using the docker driver based on user configuration
	I0811 01:14:01.496164 1513380 start.go:278] selected driver: docker
	I0811 01:14:01.496168 1513380 start.go:751] validating driver "docker" against <nil>
	I0811 01:14:01.496185 1513380 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 01:14:01.496227 1513380 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:14:01.496244 1513380 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 01:14:01.498468 1513380 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 01:14:01.498828 1513380 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:14:01.584611 1513380 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 01:14:01.528172781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:14:01.584718 1513380 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 01:14:01.584911 1513380 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0811 01:14:01.584926 1513380 cni.go:93] Creating CNI manager for ""
	I0811 01:14:01.584932 1513380 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:14:01.584937 1513380 start_flags.go:277] config:
	{Name:skaffold-20210811011400-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:skaffold-20210811011400-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:14:01.587404 1513380 out.go:177] * Starting control plane node skaffold-20210811011400-1387367 in cluster skaffold-20210811011400-1387367
	I0811 01:14:01.587445 1513380 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 01:14:01.589558 1513380 out.go:177] * Pulling base image ...
	I0811 01:14:01.589605 1513380 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:14:01.589658 1513380 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 01:14:01.589666 1513380 cache.go:56] Caching tarball of preloaded images
	I0811 01:14:01.589848 1513380 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 01:14:01.589867 1513380 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 01:14:01.590190 1513380 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/config.json ...
	I0811 01:14:01.590214 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/config.json: {Name:mkbbfc489b58532142f4bdb980db90483208b0df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:01.590378 1513380 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 01:14:01.651585 1513380 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 01:14:01.651599 1513380 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 01:14:01.651619 1513380 cache.go:205] Successfully downloaded all kic artifacts
	I0811 01:14:01.651659 1513380 start.go:313] acquiring machines lock for skaffold-20210811011400-1387367: {Name:mkb3aa4a9fa268c14e2a416ec3859ebfba70d508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 01:14:01.651807 1513380 start.go:317] acquired machines lock for "skaffold-20210811011400-1387367" in 132.003µs
	I0811 01:14:01.651839 1513380 start.go:89] Provisioning new machine with config: &{Name:skaffold-20210811011400-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:skaffold-20210811011400-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 01:14:01.651925 1513380 start.go:126] createHost starting for "" (driver="docker")
	I0811 01:14:01.654265 1513380 out.go:204] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0811 01:14:01.654508 1513380 start.go:160] libmachine.API.Create for "skaffold-20210811011400-1387367" (driver="docker")
	I0811 01:14:01.654537 1513380 client.go:168] LocalClient.Create starting
	I0811 01:14:01.654606 1513380 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 01:14:01.654636 1513380 main.go:130] libmachine: Decoding PEM data...
	I0811 01:14:01.654652 1513380 main.go:130] libmachine: Parsing certificate...
	I0811 01:14:01.654768 1513380 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 01:14:01.654784 1513380 main.go:130] libmachine: Decoding PEM data...
	I0811 01:14:01.654797 1513380 main.go:130] libmachine: Parsing certificate...
	I0811 01:14:01.655191 1513380 cli_runner.go:115] Run: docker network inspect skaffold-20210811011400-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 01:14:01.687059 1513380 cli_runner.go:162] docker network inspect skaffold-20210811011400-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 01:14:01.687127 1513380 network_create.go:255] running [docker network inspect skaffold-20210811011400-1387367] to gather additional debugging logs...
	I0811 01:14:01.687145 1513380 cli_runner.go:115] Run: docker network inspect skaffold-20210811011400-1387367
	W0811 01:14:01.723321 1513380 cli_runner.go:162] docker network inspect skaffold-20210811011400-1387367 returned with exit code 1
	I0811 01:14:01.723341 1513380 network_create.go:258] error running [docker network inspect skaffold-20210811011400-1387367]: docker network inspect skaffold-20210811011400-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: skaffold-20210811011400-1387367
	I0811 01:14:01.723354 1513380 network_create.go:260] output of [docker network inspect skaffold-20210811011400-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: skaffold-20210811011400-1387367
	
	** /stderr **
	I0811 01:14:01.723420 1513380 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:14:01.755972 1513380 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x400086e9c0] misses:0}
	I0811 01:14:01.756018 1513380 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 01:14:01.756035 1513380 network_create.go:106] attempt to create docker network skaffold-20210811011400-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 01:14:01.756088 1513380 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20210811011400-1387367
	I0811 01:14:01.827743 1513380 network_create.go:90] docker network skaffold-20210811011400-1387367 192.168.49.0/24 created
	I0811 01:14:01.827763 1513380 kic.go:106] calculated static IP "192.168.49.2" for the "skaffold-20210811011400-1387367" container
	I0811 01:14:01.827846 1513380 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 01:14:01.859273 1513380 cli_runner.go:115] Run: docker volume create skaffold-20210811011400-1387367 --label name.minikube.sigs.k8s.io=skaffold-20210811011400-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 01:14:01.892179 1513380 oci.go:102] Successfully created a docker volume skaffold-20210811011400-1387367
	I0811 01:14:01.892267 1513380 cli_runner.go:115] Run: docker run --rm --name skaffold-20210811011400-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20210811011400-1387367 --entrypoint /usr/bin/test -v skaffold-20210811011400-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 01:14:02.582232 1513380 oci.go:106] Successfully prepared a docker volume skaffold-20210811011400-1387367
	W0811 01:14:02.582278 1513380 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 01:14:02.582284 1513380 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 01:14:02.582290 1513380 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:14:02.582311 1513380 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 01:14:02.582341 1513380 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 01:14:02.582377 1513380 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v skaffold-20210811011400-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 01:14:02.724990 1513380 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20210811011400-1387367 --name skaffold-20210811011400-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20210811011400-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20210811011400-1387367 --network skaffold-20210811011400-1387367 --ip 192.168.49.2 --volume skaffold-20210811011400-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 01:14:03.317233 1513380 cli_runner.go:115] Run: docker container inspect skaffold-20210811011400-1387367 --format={{.State.Running}}
	I0811 01:14:03.373216 1513380 cli_runner.go:115] Run: docker container inspect skaffold-20210811011400-1387367 --format={{.State.Status}}
	I0811 01:14:03.427087 1513380 cli_runner.go:115] Run: docker exec skaffold-20210811011400-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 01:14:03.554476 1513380 oci.go:278] the created container "skaffold-20210811011400-1387367" has a running status.
	I0811 01:14:03.554496 1513380 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa...
	I0811 01:14:03.970298 1513380 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 01:14:04.107333 1513380 cli_runner.go:115] Run: docker container inspect skaffold-20210811011400-1387367 --format={{.State.Status}}
	I0811 01:14:04.166464 1513380 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 01:14:04.166475 1513380 kic_runner.go:115] Args: [docker exec --privileged skaffold-20210811011400-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 01:14:12.647763 1513380 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v skaffold-20210811011400-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.065350886s)
	I0811 01:14:12.647780 1513380 kic.go:188] duration metric: took 10.065467 seconds to extract preloaded images to volume
	I0811 01:14:12.647866 1513380 cli_runner.go:115] Run: docker container inspect skaffold-20210811011400-1387367 --format={{.State.Status}}
	I0811 01:14:12.690646 1513380 machine.go:88] provisioning docker machine ...
	I0811 01:14:12.690670 1513380 ubuntu.go:169] provisioning hostname "skaffold-20210811011400-1387367"
	I0811 01:14:12.690732 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:12.725653 1513380 main.go:130] libmachine: Using SSH client type: native
	I0811 01:14:12.725830 1513380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50340 <nil> <nil>}
	I0811 01:14:12.725842 1513380 main.go:130] libmachine: About to run SSH command:
	sudo hostname skaffold-20210811011400-1387367 && echo "skaffold-20210811011400-1387367" | sudo tee /etc/hostname
	I0811 01:14:12.861867 1513380 main.go:130] libmachine: SSH cmd err, output: <nil>: skaffold-20210811011400-1387367
	
	I0811 01:14:12.861934 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:12.898267 1513380 main.go:130] libmachine: Using SSH client type: native
	I0811 01:14:12.898428 1513380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50340 <nil> <nil>}
	I0811 01:14:12.898448 1513380 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-20210811011400-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-20210811011400-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-20210811011400-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 01:14:13.024555 1513380 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 01:14:13.024572 1513380 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 01:14:13.024600 1513380 ubuntu.go:177] setting up certificates
	I0811 01:14:13.024608 1513380 provision.go:83] configureAuth start
	I0811 01:14:13.024665 1513380 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20210811011400-1387367
	I0811 01:14:13.062091 1513380 provision.go:137] copyHostCerts
	I0811 01:14:13.062145 1513380 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 01:14:13.062152 1513380 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 01:14:13.062219 1513380 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 01:14:13.062296 1513380 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 01:14:13.062301 1513380 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 01:14:13.062322 1513380 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 01:14:13.062388 1513380 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 01:14:13.062391 1513380 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 01:14:13.062410 1513380 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 01:14:13.062447 1513380 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.skaffold-20210811011400-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-20210811011400-1387367]
	I0811 01:14:13.334025 1513380 provision.go:171] copyRemoteCerts
	I0811 01:14:13.334074 1513380 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 01:14:13.334114 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:13.365663 1513380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50340 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa Username:docker}
	I0811 01:14:13.447896 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 01:14:13.464302 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0811 01:14:13.480459 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 01:14:13.496918 1513380 provision.go:86] duration metric: configureAuth took 472.299097ms
	I0811 01:14:13.496933 1513380 ubuntu.go:193] setting minikube options for container-runtime
	I0811 01:14:13.497162 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:13.529044 1513380 main.go:130] libmachine: Using SSH client type: native
	I0811 01:14:13.529212 1513380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50340 <nil> <nil>}
	I0811 01:14:13.529224 1513380 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 01:14:13.645173 1513380 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 01:14:13.645185 1513380 ubuntu.go:71] root file system type: overlay
	I0811 01:14:13.645380 1513380 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 01:14:13.645443 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:13.678058 1513380 main.go:130] libmachine: Using SSH client type: native
	I0811 01:14:13.678218 1513380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50340 <nil> <nil>}
	I0811 01:14:13.678319 1513380 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 01:14:13.805529 1513380 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 01:14:13.805601 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:13.837930 1513380 main.go:130] libmachine: Using SSH client type: native
	I0811 01:14:13.838120 1513380 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50340 <nil> <nil>}
	I0811 01:14:13.838138 1513380 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 01:14:14.717402 1513380 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 01:14:13.800998844 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 01:14:14.717423 1513380 machine.go:91] provisioned docker machine in 2.026765081s
	I0811 01:14:14.717432 1513380 client.go:171] LocalClient.Create took 13.062890709s
	I0811 01:14:14.717448 1513380 start.go:168] duration metric: libmachine.API.Create for "skaffold-20210811011400-1387367" took 13.062943271s
	I0811 01:14:14.717461 1513380 start.go:267] post-start starting for "skaffold-20210811011400-1387367" (driver="docker")
	I0811 01:14:14.717466 1513380 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 01:14:14.717561 1513380 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 01:14:14.717603 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:14.756793 1513380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50340 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa Username:docker}
	I0811 01:14:14.840080 1513380 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 01:14:14.842495 1513380 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 01:14:14.842510 1513380 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 01:14:14.842520 1513380 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 01:14:14.842526 1513380 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 01:14:14.842534 1513380 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 01:14:14.842582 1513380 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 01:14:14.842664 1513380 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 01:14:14.842764 1513380 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 01:14:14.848965 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:14:14.865426 1513380 start.go:270] post-start completed in 147.952347ms
	I0811 01:14:14.865785 1513380 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20210811011400-1387367
	I0811 01:14:14.898885 1513380 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/config.json ...
	I0811 01:14:14.899157 1513380 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:14:14.899202 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:14.929598 1513380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50340 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa Username:docker}
	I0811 01:14:15.008945 1513380 start.go:129] duration metric: createHost completed in 13.35700838s
	I0811 01:14:15.008959 1513380 start.go:80] releasing machines lock for "skaffold-20210811011400-1387367", held for 13.357144305s
	I0811 01:14:15.009062 1513380 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20210811011400-1387367
	I0811 01:14:15.039784 1513380 ssh_runner.go:149] Run: systemctl --version
	I0811 01:14:15.039829 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:15.039850 1513380 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 01:14:15.039906 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:15.075411 1513380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50340 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa Username:docker}
	I0811 01:14:15.088944 1513380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50340 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa Username:docker}
	I0811 01:14:15.156785 1513380 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 01:14:15.322719 1513380 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:14:15.332817 1513380 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 01:14:15.332872 1513380 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 01:14:15.342005 1513380 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 01:14:15.354029 1513380 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 01:14:15.443686 1513380 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 01:14:15.526148 1513380 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:14:15.535879 1513380 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 01:14:15.623742 1513380 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 01:14:15.633252 1513380 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:14:15.686042 1513380 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:14:15.741872 1513380 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 01:14:15.741972 1513380 cli_runner.go:115] Run: docker network inspect skaffold-20210811011400-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:14:15.772355 1513380 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 01:14:15.775754 1513380 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:14:15.784847 1513380 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:14:15.784901 1513380 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:14:15.824665 1513380 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:14:15.824679 1513380 docker.go:466] Images already preloaded, skipping extraction
	I0811 01:14:15.824734 1513380 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:14:15.863883 1513380 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:14:15.863899 1513380 cache_images.go:74] Images are preloaded, skipping loading
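	The preload check above only lists what the Docker daemon already has and compares it with the images the release needs. A rough, standalone sketch of that comparison (the image names are taken from the stdout block above; the helper is not minikube's real code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"k8s.gcr.io/kube-apiserver:v1.21.3",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/coredns/coredns:v1.8.0",
	}
	// Same listing command as in the log: docker images --format {{.Repository}}:{{.Tag}}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would trigger image loading:", img)
		}
	}
}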
	I0811 01:14:15.863957 1513380 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 01:14:15.960438 1513380 cni.go:93] Creating CNI manager for ""
	I0811 01:14:15.960451 1513380 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:14:15.960461 1513380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 01:14:15.960473 1513380 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-20210811011400-1387367 NodeName:skaffold-20210811011400-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 01:14:15.960609 1513380 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "skaffold-20210811011400-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 01:14:15.960683 1513380 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=skaffold-20210811011400-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:skaffold-20210811011400-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
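	The rendered config keeps the pod subnet (10.244.0.0/16, repeated as the kube-proxy clusterCIDR) and the service subnet (10.96.0.0/12) consistent. A small sketch of the kind of sanity check one could apply to those two ranges:

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podCIDR, _ := net.ParseCIDR("10.244.0.0/16") // podSubnet / clusterCIDR from the config above
	_, svcCIDR, _ := net.ParseCIDR("10.96.0.0/12")  // serviceSubnet from the config above
	if overlaps(podCIDR, svcCIDR) {
		fmt.Println("pod and service CIDRs overlap; routing inside the cluster would break")
		return
	}
	fmt.Println("pod CIDR", podCIDR, "and service CIDR", svcCIDR, "are disjoint")
}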
	I0811 01:14:15.960744 1513380 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 01:14:15.967780 1513380 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 01:14:15.967843 1513380 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 01:14:15.974394 1513380 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0811 01:14:15.987148 1513380 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 01:14:15.999763 1513380 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
	I0811 01:14:16.012826 1513380 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 01:14:16.015778 1513380 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
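	Both /etc/hosts entries above (host.minikube.internal and control-plane.minikube.internal) are injected with the same grep -v / echo / cp pattern so repeated runs stay idempotent. An equivalent sketch against a local file (a simplification; the logged command runs remotely with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites path so it contains exactly one "<ip>\t<name>" entry.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name, like the grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/tmp/hosts-demo", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}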
	I0811 01:14:16.024498 1513380 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367 for IP: 192.168.49.2
	I0811 01:14:16.024545 1513380 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 01:14:16.024560 1513380 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 01:14:16.024613 1513380 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/client.key
	I0811 01:14:16.024619 1513380 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/client.crt with IP's: []
	I0811 01:14:17.492222 1513380 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/client.crt ...
	I0811 01:14:17.492242 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/client.crt: {Name:mk3cacd86dad70af5db1bb632b900d3e03fef834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:17.492485 1513380 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/client.key ...
	I0811 01:14:17.492495 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/client.key: {Name:mk63c6806246b5a9c47f69c9072b355e26d59103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:17.492595 1513380 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.key.dd3b5fb2
	I0811 01:14:17.492601 1513380 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 01:14:17.914548 1513380 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.crt.dd3b5fb2 ...
	I0811 01:14:17.914565 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.crt.dd3b5fb2: {Name:mk4b3f72f20c1c8ffd06e7f79a4676b29f27c90d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:17.914788 1513380 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.key.dd3b5fb2 ...
	I0811 01:14:17.914794 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.key.dd3b5fb2: {Name:mkbef0b8ba263722bef7720fead406f31e83a542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:17.914887 1513380 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.crt
	I0811 01:14:17.914944 1513380 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.key
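	The apiserver certificate generated above carries IP SANs for the node IP, the first service IP, localhost and 10.0.0.1. A compact crypto/x509 sketch producing a certificate with the same SANs (self-signed here for brevity, whereas minikube signs it with its own CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SANs as in the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}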
	I0811 01:14:17.914983 1513380 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.key
	I0811 01:14:17.914987 1513380 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.crt with IP's: []
	I0811 01:14:18.385265 1513380 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.crt ...
	I0811 01:14:18.385281 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.crt: {Name:mk5ad70e11935885f9afed44c7c1334dd01c76b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:18.385499 1513380 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.key ...
	I0811 01:14:18.385506 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.key: {Name:mk2d5efc96ea78c07a7e0a0d0dc86f982ebce1af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:18.385690 1513380 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 01:14:18.385727 1513380 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 01:14:18.385735 1513380 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 01:14:18.385764 1513380 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 01:14:18.385784 1513380 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 01:14:18.385804 1513380 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 01:14:18.385851 1513380 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:14:18.387519 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 01:14:18.405972 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 01:14:18.423047 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 01:14:18.440850 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/skaffold-20210811011400-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0811 01:14:18.458252 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 01:14:18.475756 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 01:14:18.492998 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 01:14:18.510384 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 01:14:18.527765 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 01:14:18.545648 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 01:14:18.562919 1513380 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 01:14:18.580363 1513380 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 01:14:18.593111 1513380 ssh_runner.go:149] Run: openssl version
	I0811 01:14:18.598030 1513380 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 01:14:18.605322 1513380 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 01:14:18.608291 1513380 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 01:14:18.608335 1513380 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 01:14:18.613127 1513380 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 01:14:18.619950 1513380 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 01:14:18.627268 1513380 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 01:14:18.630435 1513380 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 01:14:18.630486 1513380 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 01:14:18.635471 1513380 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 01:14:18.642815 1513380 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 01:14:18.650086 1513380 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:14:18.653273 1513380 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:14:18.653320 1513380 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:14:18.658310 1513380 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
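	Each CA file copied to /usr/share/ca-certificates is then exposed in /etc/ssl/certs under its OpenSSL subject hash, which is what the openssl x509 -hash / ln -fs pairs above do. A sketch that shells out to openssl in the same way and creates the link (the path and the b5213941 hash come from the log above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same hashing step as in the log: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for this CA
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mirror ln -fs: replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Println("symlink failed (needs root):", err)
		return
	}
	fmt.Println("linked", certPath, "as", link)
}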
	I0811 01:14:18.665275 1513380 kubeadm.go:390] StartCluster: {Name:skaffold-20210811011400-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:skaffold-20210811011400-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:14:18.665398 1513380 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:14:18.704474 1513380 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 01:14:18.712134 1513380 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 01:14:18.718915 1513380 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 01:14:18.718966 1513380 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:14:18.725652 1513380 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 01:14:18.725683 1513380 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 01:14:19.534074 1513380 out.go:204]   - Generating certificates and keys ...
	I0811 01:14:25.895561 1513380 out.go:204]   - Booting up control plane ...
	I0811 01:14:41.966232 1513380 out.go:204]   - Configuring RBAC rules ...
	I0811 01:14:42.386296 1513380 cni.go:93] Creating CNI manager for ""
	I0811 01:14:42.386309 1513380 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:14:42.386329 1513380 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 01:14:42.386429 1513380 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:14:42.386475 1513380 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=skaffold-20210811011400-1387367 minikube.k8s.io/updated_at=2021_08_11T01_14_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:14:42.628186 1513380 kubeadm.go:985] duration metric: took 241.798981ms to wait for elevateKubeSystemPrivileges.
	I0811 01:14:42.628212 1513380 ops.go:34] apiserver oom_adj: -16
	I0811 01:14:42.893945 1513380 kubeadm.go:392] StartCluster complete in 24.228673532s
	I0811 01:14:42.893968 1513380 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:42.894052 1513380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:14:42.895154 1513380 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:14:43.419417 1513380 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "skaffold-20210811011400-1387367" rescaled to 1
	I0811 01:14:43.419453 1513380 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 01:14:43.421677 1513380 out.go:177] * Verifying Kubernetes components...
	I0811 01:14:43.419500 1513380 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 01:14:43.421744 1513380 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 01:14:43.419748 1513380 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0811 01:14:43.421805 1513380 addons.go:59] Setting storage-provisioner=true in profile "skaffold-20210811011400-1387367"
	I0811 01:14:43.421825 1513380 addons.go:135] Setting addon storage-provisioner=true in "skaffold-20210811011400-1387367"
	W0811 01:14:43.421830 1513380 addons.go:147] addon storage-provisioner should already be in state true
	I0811 01:14:43.421839 1513380 addons.go:59] Setting default-storageclass=true in profile "skaffold-20210811011400-1387367"
	I0811 01:14:43.421851 1513380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-20210811011400-1387367"
	I0811 01:14:43.421854 1513380 host.go:66] Checking if "skaffold-20210811011400-1387367" exists ...
	I0811 01:14:43.422184 1513380 cli_runner.go:115] Run: docker container inspect skaffold-20210811011400-1387367 --format={{.State.Status}}
	I0811 01:14:43.422347 1513380 cli_runner.go:115] Run: docker container inspect skaffold-20210811011400-1387367 --format={{.State.Status}}
	I0811 01:14:43.491757 1513380 addons.go:135] Setting addon default-storageclass=true in "skaffold-20210811011400-1387367"
	W0811 01:14:43.491767 1513380 addons.go:147] addon default-storageclass should already be in state true
	I0811 01:14:43.491791 1513380 host.go:66] Checking if "skaffold-20210811011400-1387367" exists ...
	I0811 01:14:43.492281 1513380 cli_runner.go:115] Run: docker container inspect skaffold-20210811011400-1387367 --format={{.State.Status}}
	I0811 01:14:43.526016 1513380 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 01:14:43.526130 1513380 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 01:14:43.526138 1513380 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 01:14:43.526201 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:43.558029 1513380 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 01:14:43.558040 1513380 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 01:14:43.558108 1513380 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210811011400-1387367
	I0811 01:14:43.608657 1513380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50340 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa Username:docker}
	I0811 01:14:43.638322 1513380 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50340 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/skaffold-20210811011400-1387367/id_rsa Username:docker}
	I0811 01:14:43.662958 1513380 api_server.go:50] waiting for apiserver process to appear ...
	I0811 01:14:43.662998 1513380 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:14:43.663168 1513380 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 01:14:43.728564 1513380 api_server.go:70] duration metric: took 309.08619ms to wait for apiserver process to appear ...
	I0811 01:14:43.728580 1513380 api_server.go:86] waiting for apiserver healthz status ...
	I0811 01:14:43.728590 1513380 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 01:14:43.737836 1513380 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 01:14:43.738909 1513380 api_server.go:139] control plane version: v1.21.3
	I0811 01:14:43.738920 1513380 api_server.go:129] duration metric: took 10.335621ms to wait for apiserver health ...
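	The health gate above is an HTTPS GET against /healthz that must come back 200 with body "ok". A minimal sketch of the same probe, skipping TLS verification because the apiserver certificate is signed by minikube's private CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}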
	I0811 01:14:43.738926 1513380 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 01:14:43.749514 1513380 system_pods.go:59] 2 kube-system pods found
	I0811 01:14:43.749533 1513380 system_pods.go:61] "kube-controller-manager-skaffold-20210811011400-1387367" [130c77f4-cc4a-4546-ac34-98d099e86cbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0811 01:14:43.749541 1513380 system_pods.go:61] "kube-scheduler-skaffold-20210811011400-1387367" [67915f6d-638d-4aa7-8a0a-ac070b46b807] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0811 01:14:43.749547 1513380 system_pods.go:74] duration metric: took 10.616145ms to wait for pod list to return data ...
	I0811 01:14:43.749554 1513380 kubeadm.go:547] duration metric: took 330.082936ms to wait for : map[apiserver:true system_pods:true] ...
	I0811 01:14:43.749565 1513380 node_conditions.go:102] verifying NodePressure condition ...
	I0811 01:14:43.778896 1513380 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 01:14:43.778914 1513380 node_conditions.go:123] node cpu capacity is 2
	I0811 01:14:43.778924 1513380 node_conditions.go:105] duration metric: took 29.355181ms to run NodePressure ...
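	The NodePressure step reads the node's reported capacity (2 CPUs and 60796312Ki of ephemeral storage in this run). A client-go sketch that pulls the same numbers, pointed at the kubeconfig this run writes; the sketch is illustrative, not minikube's implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig"
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}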
	I0811 01:14:43.778933 1513380 start.go:231] waiting for startup goroutines ...
	I0811 01:14:43.820068 1513380 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 01:14:43.861569 1513380 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 01:14:44.366290 1513380 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
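	The host record injection rewrites the coredns ConfigMap so a hosts block sits just before the forward plugin, making host.minikube.internal resolve to 192.168.49.1. A string-level sketch of that edit (the logged command does it with sed piped into kubectl replace):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the forward plugin line.
func injectHostRecord(corefile, ip string) string {
	hostsBlock := "        hosts {\n" +
		"           " + ip + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}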
	I0811 01:14:44.437203 1513380 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0811 01:14:44.437230 1513380 addons.go:344] enableAddons completed in 1.017485739s
	I0811 01:14:44.490984 1513380 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0811 01:14:44.493747 1513380 out.go:177] * Done! kubectl is now configured to use "skaffold-20210811011400-1387367" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-08-11 01:14:04 UTC, end at Wed 2021-08-11 01:15:03 UTC. --
	Aug 11 01:14:14 skaffold-20210811011400-1387367 systemd[1]: docker.service: Succeeded.
	Aug 11 01:14:14 skaffold-20210811011400-1387367 systemd[1]: Stopped Docker Application Container Engine.
	Aug 11 01:14:14 skaffold-20210811011400-1387367 systemd[1]: Starting Docker Application Container Engine...
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.357056861Z" level=info msg="Starting up"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.359108237Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.359144880Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.359168413Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.359183420Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.361926097Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.361959377Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.361986495Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.362004661Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.377721826Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.383927701Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.383958290Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.383966355Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.384118330Z" level=info msg="Loading containers: start."
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.585286281Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.667337758Z" level=info msg="Loading containers: done."
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.691709053Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.691804724Z" level=info msg="Daemon has completed initialization"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 systemd[1]: Started Docker Application Container Engine.
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.731425397Z" level=info msg="API listen on [::]:2376"
	Aug 11 01:14:14 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:14:14.747257130Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 11 01:15:01 skaffold-20210811011400-1387367 dockerd[455]: time="2021-08-11T01:15:01.982676967Z" level=info msg="ignoring event" container=900282536ec0ffa2931f321e21dc87680c582a97b0c6af73bcf39e7a31e7b939 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	008bfabe1a840       1a1f05a2cd7c2       2 seconds ago       Running             coredns                   0                   51d45bec3f7b6
	49dbb6e6a2f5a       4ea38350a1beb       3 seconds ago       Running             kube-proxy                0                   9f547017f3331
	ab46cdf91bf0e       05b738aa1bc63       32 seconds ago      Running             etcd                      0                   0d0e6ae528f1b
	e2667655fa9dc       44a6d50ef170d       32 seconds ago      Running             kube-apiserver            0                   5f67ebf9e4e64
	3c5eb66eaf48d       31a3b96cefc1e       32 seconds ago      Running             kube-scheduler            0                   737c6e851b5a3
	4ece9f1c08c49       cb310ff289d79       32 seconds ago      Running             kube-controller-manager   0                   110cd476bb4b9
	
	* 
	* ==> coredns [008bfabe1a84] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               skaffold-20210811011400-1387367
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=skaffold-20210811011400-1387367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd
	                    minikube.k8s.io/name=skaffold-20210811011400-1387367
	                    minikube.k8s.io/updated_at=2021_08_11T01_14_42_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Aug 2021 01:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-20210811011400-1387367
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Aug 2021 01:14:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Aug 2021 01:14:54 +0000   Wed, 11 Aug 2021 01:14:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Aug 2021 01:14:54 +0000   Wed, 11 Aug 2021 01:14:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Aug 2021 01:14:54 +0000   Wed, 11 Aug 2021 01:14:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Aug 2021 01:14:54 +0000   Wed, 11 Aug 2021 01:14:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    skaffold-20210811011400-1387367
	Capacity:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  60796312Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                e02c225b-bc49-4a2c-8513-893eca715eb6
	  Boot ID:                    dff2c102-a0cf-4fb0-a2ea-36617f3a3229
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.7
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-5qtwg                                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8s
	  kube-system                 etcd-skaffold-20210811011400-1387367                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17s
	  kube-system                 kube-apiserver-skaffold-20210811011400-1387367             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-skaffold-20210811011400-1387367    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-7xbbj                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-scheduler-skaffold-20210811011400-1387367             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  32s (x5 over 33s)  kubelet     Node skaffold-20210811011400-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x5 over 33s)  kubelet     Node skaffold-20210811011400-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x5 over 33s)  kubelet     Node skaffold-20210811011400-1387367 status is now: NodeHasSufficientPID
	  Normal  Starting                 18s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  17s                kubelet     Node skaffold-20210811011400-1387367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s                kubelet     Node skaffold-20210811011400-1387367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s                kubelet     Node skaffold-20210811011400-1387367 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             17s                kubelet     Node skaffold-20210811011400-1387367 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  17s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9s                 kubelet     Node skaffold-20210811011400-1387367 status is now: NodeReady
	  Normal  Starting                 2s                 kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001093] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000822] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000cf2b9e77
	[  +0.001079] FS-Cache: N-key=[8] '38a8010000000000'
	[  +0.008061] FS-Cache: Duplicate cookie detected
	[  +0.000824] FS-Cache: O-cookie c=000000009e8af87d [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000d0f41ca1 n=00000000882d24dd
	[  +0.001078] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000828] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001344] FS-Cache: N-cookie d=00000000d0f41ca1 n=000000006ce4882d
	[  +0.001069] FS-Cache: N-key=[8] '38a8010000000000'
	[  +1.509820] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=00000000e1eedaf3 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001318] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000025fbee24
	[  +0.001053] FS-Cache: O-key=[8] '37a8010000000000'
	[  +0.000829] FS-Cache: N-cookie c=000000006f83a19d [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001316] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000d322ea0c
	[  +0.001048] FS-Cache: N-key=[8] '37a8010000000000'
	[  +0.277640] FS-Cache: Duplicate cookie detected
	[  +0.000818] FS-Cache: O-cookie c=000000007ae3c387 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001327] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000004bd4688e
	[  +0.001069] FS-Cache: O-key=[8] '3ca8010000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000000007642642 [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001309] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000ae88504f
	[  +0.001071] FS-Cache: N-key=[8] '3ca8010000000000'
	
	* 
	* ==> etcd [ab46cdf91bf0] <==
	* raft2021/08/11 01:14:32 INFO: aec36adc501070cc became follower at term 1
	raft2021/08/11 01:14:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-11 01:14:32.328756 W | auth: simple token is not cryptographically signed
	2021-08-11 01:14:32.413676 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-11 01:14:32.418233 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/11 01:14:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-11 01:14:32.418670 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-11 01:14:32.427365 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-11 01:14:32.427509 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-11 01:14:32.427650 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/11 01:14:33 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/11 01:14:33 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/11 01:14:33 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/11 01:14:33 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/11 01:14:33 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-11 01:14:33.320980 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-11 01:14:33.330019 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-11 01:14:33.330066 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-11 01:14:33.330092 I | etcdserver: published {Name:skaffold-20210811011400-1387367 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-11 01:14:33.330097 I | embed: ready to serve client requests
	2021-08-11 01:14:33.331347 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-11 01:14:33.331506 I | embed: ready to serve client requests
	2021-08-11 01:14:33.465717 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-11 01:14:53.930729 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-11 01:15:02.571726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  01:15:03 up 10:57,  0 users,  load average: 2.25, 1.62, 1.38
	Linux skaffold-20210811011400-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e2667655fa9d] <==
	* E0811 01:14:39.122746       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0811 01:14:39.190152       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0811 01:14:39.190374       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0811 01:14:39.199574       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0811 01:14:39.202005       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0811 01:14:39.202083       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0811 01:14:39.202166       1 cache.go:39] Caches are synced for autoregister controller
	I0811 01:14:39.238854       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0811 01:14:39.340883       1 controller.go:611] quota admission added evaluator for: namespaces
	I0811 01:14:39.993831       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0811 01:14:39.993864       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0811 01:14:40.000091       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0811 01:14:40.003947       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0811 01:14:40.003966       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0811 01:14:40.450029       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0811 01:14:40.483495       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0811 01:14:40.580401       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0811 01:14:40.581631       1 controller.go:611] quota admission added evaluator for: endpoints
	I0811 01:14:40.590509       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0811 01:14:41.697358       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0811 01:14:42.271531       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0811 01:14:42.337392       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0811 01:14:46.028023       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0811 01:14:55.119428       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0811 01:14:55.266882       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [4ece9f1c08c4] <==
	* I0811 01:14:54.573429       1 shared_informer.go:247] Caches are synced for disruption 
	I0811 01:14:54.573439       1 disruption.go:371] Sending events to api server.
	I0811 01:14:54.581149       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0811 01:14:54.584390       1 shared_informer.go:247] Caches are synced for expand 
	I0811 01:14:54.587680       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-skaffold-20210811011400-1387367" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0811 01:14:54.591355       1 event.go:291] "Event occurred" object="kube-system/etcd-skaffold-20210811011400-1387367" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0811 01:14:54.597858       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0811 01:14:54.607159       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0811 01:14:54.700153       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0811 01:14:54.747305       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0811 01:14:54.774128       1 shared_informer.go:247] Caches are synced for cronjob 
	I0811 01:14:54.775279       1 shared_informer.go:247] Caches are synced for resource quota 
	I0811 01:14:54.786465       1 shared_informer.go:247] Caches are synced for resource quota 
	I0811 01:14:54.795732       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0811 01:14:54.821868       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0811 01:14:54.846032       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0811 01:14:54.856550       1 shared_informer.go:247] Caches are synced for job 
	I0811 01:14:55.121442       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 1"
	I0811 01:14:55.235750       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0811 01:14:55.243615       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0811 01:14:55.243632       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0811 01:14:55.273382       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7xbbj"
	E0811 01:14:55.290741       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"df3ce75a-8057-4ebc-98f4-5331692f0e3e", ResourceVersion:"291", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764241282, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400181f140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400181f158)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x40018bb2c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40018017c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400181f
170), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400181f188), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018bb300)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001865f20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001873438), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000091260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400186ff60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001873488)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0811 01:14:55.571323       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-5qtwg"
	I0811 01:14:59.561639       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [49dbb6e6a2f5] <==
	* I0811 01:15:00.990495       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0811 01:15:00.990574       1 server_others.go:140] Detected node IP 192.168.49.2
	W0811 01:15:00.990601       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0811 01:15:01.015395       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0811 01:15:01.015432       1 server_others.go:212] Using iptables Proxier.
	I0811 01:15:01.015443       1 server_others.go:219] creating dualStackProxier for iptables.
	W0811 01:15:01.015454       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0811 01:15:01.016363       1 server.go:643] Version: v1.21.3
	I0811 01:15:01.017337       1 config.go:315] Starting service config controller
	I0811 01:15:01.017347       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0811 01:15:01.017364       1 config.go:224] Starting endpoint slice config controller
	I0811 01:15:01.017468       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0811 01:15:01.020480       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0811 01:15:01.021840       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0811 01:15:01.117455       1 shared_informer.go:247] Caches are synced for service config 
	I0811 01:15:01.117521       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [3c5eb66eaf48] <==
	* W0811 01:14:39.135229       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0811 01:14:39.135265       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0811 01:14:39.135274       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0811 01:14:39.135307       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0811 01:14:39.233073       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0811 01:14:39.233407       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 01:14:39.234250       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 01:14:39.234442       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0811 01:14:39.286041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 01:14:39.286348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 01:14:39.286627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 01:14:39.286865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 01:14:39.286955       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 01:14:39.286875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 01:14:39.287036       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 01:14:39.287350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 01:14:39.287413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0811 01:14:39.287469       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 01:14:39.287490       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 01:14:39.287545       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 01:14:39.293156       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 01:14:39.304846       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 01:14:40.162447       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 01:14:40.484306       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0811 01:14:43.634905       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-08-11 01:14:04 UTC, end at Wed 2021-08-11 01:15:03 UTC. --
	Aug 11 01:14:54 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:54.569156    2331 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 11 01:14:54 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:54.569540    2331 docker_service.go:359] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Aug 11 01:14:54 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:54.569658    2331 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.285331    2331 topology_manager.go:187] "Topology Admit Handler"
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: E0811 01:14:55.297162    2331 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:skaffold-20210811011400-1387367" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'skaffold-20210811011400-1387367' and this object
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.337287    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bcbb310-1be7-4378-b429-6b0263a750b5-xtables-lock\") pod \"kube-proxy-7xbbj\" (UID: \"7bcbb310-1be7-4378-b429-6b0263a750b5\") "
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.337337    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bcbb310-1be7-4378-b429-6b0263a750b5-lib-modules\") pod \"kube-proxy-7xbbj\" (UID: \"7bcbb310-1be7-4378-b429-6b0263a750b5\") "
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.337366    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcf8w\" (UniqueName: \"kubernetes.io/projected/7bcbb310-1be7-4378-b429-6b0263a750b5-kube-api-access-wcf8w\") pod \"kube-proxy-7xbbj\" (UID: \"7bcbb310-1be7-4378-b429-6b0263a750b5\") "
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.337423    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bcbb310-1be7-4378-b429-6b0263a750b5-kube-proxy\") pod \"kube-proxy-7xbbj\" (UID: \"7bcbb310-1be7-4378-b429-6b0263a750b5\") "
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.588293    2331 topology_manager.go:187] "Topology Admit Handler"
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: E0811 01:14:55.590821    2331 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:skaffold-20210811011400-1387367" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'skaffold-20210811011400-1387367' and this object
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.640052    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6mcv\" (UniqueName: \"kubernetes.io/projected/aec31525-7f38-4c69-80e4-76071c25d920-kube-api-access-m6mcv\") pod \"coredns-558bd4d5db-5qtwg\" (UID: \"aec31525-7f38-4c69-80e4-76071c25d920\") "
	Aug 11 01:14:55 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:14:55.640125    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aec31525-7f38-4c69-80e4-76071c25d920-config-volume\") pod \"coredns-558bd4d5db-5qtwg\" (UID: \"aec31525-7f38-4c69-80e4-76071c25d920\") "
	Aug 11 01:14:56 skaffold-20210811011400-1387367 kubelet[2331]: E0811 01:14:56.454848    2331 projected.go:293] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 11 01:14:56 skaffold-20210811011400-1387367 kubelet[2331]: E0811 01:14:56.454890    2331 projected.go:199] Error preparing data for projected volume kube-api-access-wcf8w for pod kube-system/kube-proxy-7xbbj: failed to sync configmap cache: timed out waiting for the condition
	Aug 11 01:14:56 skaffold-20210811011400-1387367 kubelet[2331]: E0811 01:14:56.454995    2331 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/projected/7bcbb310-1be7-4378-b429-6b0263a750b5-kube-api-access-wcf8w podName:7bcbb310-1be7-4378-b429-6b0263a750b5 nodeName:}" failed. No retries permitted until 2021-08-11 01:14:56.95495971 +0000 UTC m=+14.741886890 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-api-access-wcf8w\" (UniqueName: \"kubernetes.io/projected/7bcbb310-1be7-4378-b429-6b0263a750b5-kube-api-access-wcf8w\") pod \"kube-proxy-7xbbj\" (UID: \"7bcbb310-1be7-4378-b429-6b0263a750b5\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 11 01:14:56 skaffold-20210811011400-1387367 kubelet[2331]: E0811 01:14:56.745123    2331 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Aug 11 01:14:56 skaffold-20210811011400-1387367 kubelet[2331]: E0811 01:14:56.745276    2331 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/aec31525-7f38-4c69-80e4-76071c25d920-config-volume podName:aec31525-7f38-4c69-80e4-76071c25d920 nodeName:}" failed. No retries permitted until 2021-08-11 01:14:57.245245227 +0000 UTC m=+15.032172407 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aec31525-7f38-4c69-80e4-76071c25d920-config-volume\") pod \"coredns-558bd4d5db-5qtwg\" (UID: \"aec31525-7f38-4c69-80e4-76071c25d920\") : failed to sync configmap cache: timed out waiting for the condition"
	Aug 11 01:15:01 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:15:01.064738    2331 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-558bd4d5db-5qtwg through plugin: invalid network status for"
	Aug 11 01:15:01 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:15:01.254835    2331 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-558bd4d5db-5qtwg through plugin: invalid network status for"
	Aug 11 01:15:02 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:15:02.310413    2331 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-558bd4d5db-5qtwg through plugin: invalid network status for"
	Aug 11 01:15:03 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:15:03.322221    2331 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 11 01:15:03 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:15:03.357799    2331 topology_manager.go:187] "Topology Admit Handler"
	Aug 11 01:15:03 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:15:03.407012    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6249e2fa-352b-4901-a7ac-ce78c3febeca-tmp\") pod \"storage-provisioner\" (UID: \"6249e2fa-352b-4901-a7ac-ce78c3febeca\") "
	Aug 11 01:15:03 skaffold-20210811011400-1387367 kubelet[2331]: I0811 01:15:03.407067    2331 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbfg\" (UniqueName: \"kubernetes.io/projected/6249e2fa-352b-4901-a7ac-ce78c3febeca-kube-api-access-qzbfg\") pod \"storage-provisioner\" (UID: \"6249e2fa-352b-4901-a7ac-ce78c3febeca\") "
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p skaffold-20210811011400-1387367 -n skaffold-20210811011400-1387367
helpers_test.go:262: (dbg) Run:  kubectl --context skaffold-20210811011400-1387367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:268: non-running pods: 
helpers_test.go:270: ======> post-mortem[TestSkaffold]: describe non-running pods <======
helpers_test.go:273: (dbg) Run:  kubectl --context skaffold-20210811011400-1387367 describe pod 
helpers_test.go:273: (dbg) Non-zero exit: kubectl --context skaffold-20210811011400-1387367 describe pod : exit status 1 (95.86797ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:275: kubectl --context skaffold-20210811011400-1387367 describe pod : exit status 1
helpers_test.go:176: Cleaning up "skaffold-20210811011400-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-20210811011400-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-20210811011400-1387367: (2.561378471s)
--- FAIL: TestSkaffold (67.14s)

                                                
                                    
x
+
TestMissingContainerUpgrade (63.99s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.757649450.exe start -p missing-upgrade-20210811012259-1387367 --memory=2200 --driver=docker  --container-runtime=docker
E0811 01:23:04.808738 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.757649450.exe start -p missing-upgrade-20210811012259-1387367 --memory=2200 --driver=docker  --container-runtime=docker: exit status 70 (45.054438416s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20210811012259-1387367] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20210811012259-1387367
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7845MB available) ...
	* Deleting "missing-upgrade-20210811012259-1387367" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7845MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-20210811012259-1387367" running: temporary error created container "missing-upgrade-20210811012259-1387367" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210811012259-1387367" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-20210811012259-1387367" running: temporary error created container "missing-upgrade-20210811012259-1387367" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.757649450.exe start -p missing-upgrade-20210811012259-1387367 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.757649450.exe start -p missing-upgrade-20210811012259-1387367 --memory=2200 --driver=docker  --container-runtime=docker: exit status 70 (6.734133021s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20210811012259-1387367] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210811012259-1387367
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210811012259-1387367" ...
	* Restarting existing docker container for "missing-upgrade-20210811012259-1387367" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210811012259-1387367", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210811012259-1387367" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210811012259-1387367", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.757649450.exe start -p missing-upgrade-20210811012259-1387367 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.757649450.exe start -p missing-upgrade-20210811012259-1387367 --memory=2200 --driver=docker  --container-runtime=docker: exit status 70 (7.322949636s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20210811012259-1387367] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210811012259-1387367
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210811012259-1387367" ...
	* Restarting existing docker container for "missing-upgrade-20210811012259-1387367" ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210811012259-1387367", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210811012259-1387367" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210811012259-1387367", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: release start failed: exit status 70
panic.go:613: *** TestMissingContainerUpgrade FAILED at 2021-08-11 01:24:01.472596079 +0000 UTC m=+3266.260831444
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect missing-upgrade-20210811012259-1387367
helpers_test.go:236: (dbg) docker inspect missing-upgrade-20210811012259-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8cf55619011f12ed0b9b9a86372641b628e790e70e106a9b9867f3fb527b341b",
	        "Created": "2021-08-11T01:23:23.673716738Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2021-08-11T01:24:01.203491528Z",
	            "FinishedAt": "2021-08-11T01:24:01.202642016Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/8cf55619011f12ed0b9b9a86372641b628e790e70e106a9b9867f3fb527b341b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8cf55619011f12ed0b9b9a86372641b628e790e70e106a9b9867f3fb527b341b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8cf55619011f12ed0b9b9a86372641b628e790e70e106a9b9867f3fb527b341b/hosts",
	        "LogPath": "/var/lib/docker/containers/8cf55619011f12ed0b9b9a86372641b628e790e70e106a9b9867f3fb527b341b/8cf55619011f12ed0b9b9a86372641b628e790e70e106a9b9867f3fb527b341b-json.log",
	        "Name": "/missing-upgrade-20210811012259-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20210811012259-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fd80b50b01f82b0121cdf597b6a942147812b51038970766c013c35f80e6d408-init/diff:/var/lib/docker/overlay2/22c056903417c58f9ad9d3b25b090e8df793c3141b19064dca0c311d11b8a4eb/diff:/var/lib/docker/overlay2/057f52d960f3c7aaa8ad14e0a4427d7671958af0d0828d6959319b14ead9153e/diff:/var/lib/docker/overlay2/4a6839109dba7fd7317ad3414f2882a93887ba80317129365e7acef207302f68/diff:/var/lib/docker/overlay2/408b9704ca58f064b9a9c5112017873a88724c94069baa72839a21625f707ad2/diff:/var/lib/docker/overlay2/c467279efb1dca130e5d3fe553f3267a7ebc509591160d4f1eb768104d90bc03/diff:/var/lib/docker/overlay2/37be36a40950ef4d7fb9e0242ca92b32faf114abe4c3e24e9ffc397f98dde834/diff:/var/lib/docker/overlay2/c62688324ff124fae0d68347dc128a06e8086c6e4c54888e7d6e952293223a3b/diff:/var/lib/docker/overlay2/bc3da3d252525a7afd328612a39a41e95ae8a23c25757f7ab194156013863db8/diff:/var/lib/docker/overlay2/3af348cedc770761c740b556d914b2a059324fb53d07a229cd490be1d79e25b2/diff:/var/lib/docker/overlay2/793b6a
8cf0d85c10c933a371e9ac6f5b727ff9bafda0eaaa19a4bf24ee9d6edc/diff:/var/lib/docker/overlay2/e4d738154ed55a2515815798ff3e37c9db8000141af78755639d2ff0655bc6de/diff:/var/lib/docker/overlay2/aea1be6ba3cf5d37ca1105bddf47eb31487e7f73f9d5a225e8fa405270f16d5e/diff:/var/lib/docker/overlay2/09577064e0f2c123b9587b7e54c62ae149fce1703e9d288acc6767b0a7d352cc/diff:/var/lib/docker/overlay2/02ebc1ed37b4729314159c117dc55ec27fde28bb58df366e95be1ec6a70cb8aa/diff:/var/lib/docker/overlay2/6ca19ada3b17dc95725710b33ee9f3ccd533ffa7d719bcb67e7baa036e2f11e4/diff:/var/lib/docker/overlay2/25f51166fc7691685d7913b35e02555b297845b42f6e2ff22df4d620a21de7f8/diff:/var/lib/docker/overlay2/6024fd41421e9af0d4af9d5cb561138287f2b124c6a1112bd55644f4fdb85d2d/diff:/var/lib/docker/overlay2/5926d856a7f697cc3898d74198ed445844a70dca213cbbc83a59648f2387f9e8/diff:/var/lib/docker/overlay2/1daaec396c2e6b7d66eca5695da42ac6bf924e254797f0d0e2d30a77f514374b/diff:/var/lib/docker/overlay2/0d7e6b603db01feb76b5ef74831ff7ff33ae9a0835d55dc58f2a9557504e65c8/diff:/var/lib/d
ocker/overlay2/e1f18c0251e04cf96b7eb378f51fc8c8f82983a59b969bf3fdaad81c51c117a5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd80b50b01f82b0121cdf597b6a942147812b51038970766c013c35f80e6d408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd80b50b01f82b0121cdf597b6a942147812b51038970766c013c35f80e6d408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd80b50b01f82b0121cdf597b6a942147812b51038970766c013c35f80e6d408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20210811012259-1387367",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20210811012259-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20210811012259-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20210811012259-1387367",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20210811012259-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f71b341b63966e00ff6fa4b33f167f67388a46c85c4fad0edf6c8188797a2511",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/f71b341b6396",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6869d7675aec689fca4fea8386593b977e372b14f340e711be86ae466d08a27d",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210811012259-1387367 -n missing-upgrade-20210811012259-1387367
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210811012259-1387367 -n missing-upgrade-20210811012259-1387367: exit status 7 (106.385725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "missing-upgrade-20210811012259-1387367" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "missing-upgrade-20210811012259-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-20210811012259-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-20210811012259-1387367: (1.708495879s)
--- FAIL: TestMissingContainerUpgrade (63.99s)
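
Note: the repeated StartHost failures in this test trace back to the Go template the old minikube release evidently uses to look up the container's published SSH port, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} (the template fragment appears verbatim in the error). The docker inspect output above shows the container exited with "Ports": {}, so there is no "22/tcp" entry to index. The following is a minimal sketch that reproduces the same "index of untyped nil" error shape; the data literal is a hand-built stand-in for the inspect JSON, not minikube's actual code path.

package main

import (
	"fmt"
	"os"
	"text/template"
)

// Rough reproduction of the "index of untyped nil" failure seen above.
// The map below is an assumed stand-in for the `docker inspect` output of the
// exited container, whose NetworkSettings.Ports field is empty ("Ports": {}),
// so there is no "22/tcp" key to index into.
func main() {
	tmpl := template.Must(template.New("").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))

	inspect := map[string]interface{}{
		"NetworkSettings": map[string]interface{}{
			"Ports": map[string]interface{}{}, // no published ports: container is not running
		},
	}

	if err := tmpl.Execute(os.Stdout, inspect); err != nil {
		// Prints an error of the same form as the log:
		//   template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>:
		//   error calling index: index of untyped nil
		fmt.Fprintln(os.Stderr, err)
	}
}

In other words, until the container actually reaches a running state and publishes port 22, any port lookup of this form will keep failing, which is why the subsequent restart attempts hit the same error.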

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (807.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210811011523-1387367 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-20210811011523-1387367 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: exit status 109 (13m24.082734395s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20210811011523-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20210811011523-1387367 in cluster old-k8s-version-20210811011523-1387367
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20210811011523-1387367" ...
	* Preparing Kubernetes v1.14.0 on Docker 20.10.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Aug 11 01:36:52 old-k8s-version-20210811011523-1387367 kubelet[69013]: F0811 01:36:52.633715   69013 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	  Aug 11 01:36:54 old-k8s-version-20210811011523-1387367 kubelet[69218]: F0811 01:36:54.420969   69218 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	  Aug 11 01:36:56 old-k8s-version-20210811011523-1387367 kubelet[69428]: F0811 01:36:56.035361   69428 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	
	

                                                
                                                
-- /stdout --
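Aside: the value the kubelet repeatedly fails to write in the "Problems detected" lines above, 4611686018427387904, is 2^62 bytes. The arithmetic can be checked with the snippet below; whether the kubelet intends it as an effectively-unlimited hugetlb limit is an assumption, not something stated in the log.

package main

import "fmt"

func main() {
	// 2^62 == 4611686018427387904, the number from the failed
	// hugetlb.64kB.limit_in_bytes writes in the kubelet log above.
	fmt.Println(int64(1) << 62)
}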
** stderr ** 
	I0811 01:23:37.913419 1560222 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:23:37.913567 1560222 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:23:37.913579 1560222 out.go:311] Setting ErrFile to fd 2...
	I0811 01:23:37.913606 1560222 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:23:37.913766 1560222 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:23:37.914017 1560222 out.go:305] Setting JSON to false
	I0811 01:23:37.914866 1560222 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":39965,"bootTime":1628605053,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 01:23:37.914954 1560222 start.go:121] virtualization:  
	I0811 01:23:37.918220 1560222 out.go:177] * [old-k8s-version-20210811011523-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 01:23:37.921172 1560222 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 01:23:37.919492 1560222 notify.go:169] Checking for updates...
	I0811 01:23:37.923315 1560222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:23:37.925616 1560222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 01:23:37.927985 1560222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 01:23:37.931474 1560222 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0811 01:23:37.931545 1560222 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 01:23:37.985137 1560222 docker.go:132] docker version: linux-20.10.8
	I0811 01:23:37.985286 1560222 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:23:38.077887 1560222 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-11 01:23:38.019884801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:23:38.078010 1560222 docker.go:244] overlay module found
	I0811 01:23:38.080414 1560222 out.go:177] * Using the docker driver based on existing profile
	I0811 01:23:38.080438 1560222 start.go:278] selected driver: docker
	I0811 01:23:38.080444 1560222 start.go:751] validating driver "docker" against &{Name:old-k8s-version-20210811011523-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210811011523-1387367 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeReques
ted:false ExtraDisks:0}
	I0811 01:23:38.080536 1560222 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 01:23:38.080580 1560222 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:23:38.080596 1560222 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0811 01:23:38.082470 1560222 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 01:23:38.082842 1560222 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:23:38.168795 1560222 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-11 01:23:38.111102164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0811 01:23:38.168935 1560222 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:23:38.168961 1560222 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0811 01:23:38.171081 1560222 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
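The two warnings above mean the host does not have a mounted memory cgroup controller, so the memory limit requested in the profile (Memory:2200) cannot be enforced by the docker driver. A quick way to check this on the host, independent of minikube's own detection code, is sketched below (cgroup v1 paths assumed):

	# Sketch only, assuming cgroup v1; this is not minikube's detection logic.
	grep '^memory' /proc/cgroups          # last column ("enabled") should be 1
	test -d /sys/fs/cgroup/memory && echo "memory controller mounted"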
	I0811 01:23:38.171189 1560222 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 01:23:38.171215 1560222 cni.go:93] Creating CNI manager for ""
	I0811 01:23:38.171222 1560222 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:23:38.171240 1560222 start_flags.go:277] config:
	{Name:old-k8s-version-20210811011523-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210811011523-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:23:38.173674 1560222 out.go:177] * Starting control plane node old-k8s-version-20210811011523-1387367 in cluster old-k8s-version-20210811011523-1387367
	I0811 01:23:38.173707 1560222 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 01:23:38.175596 1560222 out.go:177] * Pulling base image ...
	I0811 01:23:38.175638 1560222 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0811 01:23:38.175694 1560222 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-arm64.tar.lz4
	I0811 01:23:38.175708 1560222 cache.go:56] Caching tarball of preloaded images
	I0811 01:23:38.175902 1560222 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 01:23:38.175929 1560222 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0811 01:23:38.176069 1560222 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/config.json ...
	I0811 01:23:38.176267 1560222 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 01:23:38.227260 1560222 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 01:23:38.227292 1560222 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 01:23:38.227306 1560222 cache.go:205] Successfully downloaded all kic artifacts
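The "Checking for ... in local docker daemon" / "exists in daemon, skipping load" lines show the digest-pinned kicbase image being looked up locally before any pull. A rough manual equivalent of that check is sketched below; it assumes the digest reference is resolvable by the local daemon, which the "Found ... in local docker daemon" line confirms for this run:

	# Hedged sketch of the "exists in daemon, skipping pull" check.
	IMG='gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79'
	if docker image inspect "$IMG" >/dev/null 2>&1; then
	  echo "present, skipping pull"
	else
	  docker pull "$IMG"
	fi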
	I0811 01:23:38.227345 1560222 start.go:313] acquiring machines lock for old-k8s-version-20210811011523-1387367: {Name:mk372eb59e8858e52de63e8707116546762d7105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 01:23:38.227478 1560222 start.go:317] acquired machines lock for "old-k8s-version-20210811011523-1387367" in 102.801µs
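start.go serializes machine operations behind a named lock with a 500ms retry delay and a 10m timeout (the parameters printed above); here it is acquired almost instantly because nothing else holds it. The retry-until-deadline idea, expressed as a standalone shell sketch using flock rather than minikube's Go lock:

	# Sketch of the retry-with-timeout locking idea; minikube's actual lock is
	# implemented in Go, this only mirrors the 500ms delay / 10m timeout.
	LOCK=/tmp/machines-old-k8s-version.lock
	exec 9>"$LOCK"
	deadline=$((SECONDS + 600))                 # 10m timeout
	until flock -n 9; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for lock" >&2; exit 1; }
	  sleep 0.5                                 # 500ms between attempts
	done
	echo "machines lock acquired"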
	I0811 01:23:38.227510 1560222 start.go:93] Skipping create...Using existing machine configuration
	I0811 01:23:38.227521 1560222 fix.go:55] fixHost starting: 
	I0811 01:23:38.227800 1560222 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210811011523-1387367 --format={{.State.Status}}
	I0811 01:23:38.260385 1560222 fix.go:108] recreateIfNeeded on old-k8s-version-20210811011523-1387367: state=Stopped err=<nil>
	W0811 01:23:38.260419 1560222 fix.go:134] unexpected machine state, will restart: <nil>
	I0811 01:23:38.263047 1560222 out.go:177] * Restarting existing docker container for "old-k8s-version-20210811011523-1387367" ...
	I0811 01:23:38.263124 1560222 cli_runner.go:115] Run: docker start old-k8s-version-20210811011523-1387367
	I0811 01:23:38.632806 1560222 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210811011523-1387367 --format={{.State.Status}}
	I0811 01:23:38.679029 1560222 kic.go:420] container "old-k8s-version-20210811011523-1387367" state is running.
	I0811 01:23:38.679421 1560222 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210811011523-1387367
	I0811 01:23:38.718291 1560222 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/config.json ...
	I0811 01:23:38.718560 1560222 machine.go:88] provisioning docker machine ...
	I0811 01:23:38.718581 1560222 ubuntu.go:169] provisioning hostname "old-k8s-version-20210811011523-1387367"
	I0811 01:23:38.718688 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:38.756976 1560222 main.go:130] libmachine: Using SSH client type: native
	I0811 01:23:38.757191 1560222 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50395 <nil> <nil>}
	I0811 01:23:38.757205 1560222 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210811011523-1387367 && echo "old-k8s-version-20210811011523-1387367" | sudo tee /etc/hostname
	I0811 01:23:38.758453 1560222 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0811 01:23:41.882088 1560222 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210811011523-1387367
	
	I0811 01:23:41.882167 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:41.914345 1560222 main.go:130] libmachine: Using SSH client type: native
	I0811 01:23:41.914530 1560222 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50395 <nil> <nil>}
	I0811 01:23:41.914568 1560222 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210811011523-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210811011523-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210811011523-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 01:23:42.028671 1560222 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 01:23:42.028694 1560222 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/k
ey.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 01:23:42.028729 1560222 ubuntu.go:177] setting up certificates
	I0811 01:23:42.028737 1560222 provision.go:83] configureAuth start
	I0811 01:23:42.028791 1560222 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210811011523-1387367
	I0811 01:23:42.060257 1560222 provision.go:137] copyHostCerts
	I0811 01:23:42.060318 1560222 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 01:23:42.060331 1560222 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 01:23:42.060398 1560222 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 01:23:42.060492 1560222 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 01:23:42.060504 1560222 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 01:23:42.060526 1560222 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 01:23:42.060580 1560222 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 01:23:42.060589 1560222 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 01:23:42.060609 1560222 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 01:23:42.060659 1560222 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20210811011523-1387367 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210811011523-1387367]
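provision.go regenerates the machine's server certificate, signed by the minikube CA, with the SANs listed in the line above. Roughly the same result can be produced with openssl; this is only an illustrative equivalent of what the Go code does, reusing the SANs from the log:

	# Illustrative openssl equivalent of the server-cert generation above.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.old-k8s-version-20210811011523-1387367"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:old-k8s-version-20210811011523-1387367')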
	I0811 01:23:42.428011 1560222 provision.go:171] copyRemoteCerts
	I0811 01:23:42.428098 1560222 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 01:23:42.428146 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:42.468765 1560222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50395 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210811011523-1387367/id_rsa Username:docker}
	I0811 01:23:42.551761 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 01:23:42.568665 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0811 01:23:42.584398 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 01:23:42.600147 1560222 provision.go:86] duration metric: configureAuth took 571.397892ms
	I0811 01:23:42.600205 1560222 ubuntu.go:193] setting minikube options for container-runtime
	I0811 01:23:42.600435 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:42.631821 1560222 main.go:130] libmachine: Using SSH client type: native
	I0811 01:23:42.631988 1560222 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50395 <nil> <nil>}
	I0811 01:23:42.632003 1560222 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 01:23:42.745494 1560222 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 01:23:42.745514 1560222 ubuntu.go:71] root file system type: overlay
	I0811 01:23:42.745699 1560222 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 01:23:42.745766 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:42.777541 1560222 main.go:130] libmachine: Using SSH client type: native
	I0811 01:23:42.777732 1560222 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50395 <nil> <nil>}
	I0811 01:23:42.777832 1560222 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 01:23:42.900998 1560222 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 01:23:42.901095 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:42.932466 1560222 main.go:130] libmachine: Using SSH client type: native
	I0811 01:23:42.932641 1560222 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50395 <nil> <nil>}
	I0811 01:23:42.932668 1560222 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 01:23:43.049326 1560222 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 01:23:43.049352 1560222 machine.go:91] provisioned docker machine in 4.330780427s
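The empty ExecStart= line in the unit written above clears the ExecStart inherited from the packaged docker.service, so the single replacement command is valid (the comments embedded in the unit explain why). The unit is staged as docker.service.new and only moved into place, followed by daemon-reload and restart, when it differs from what is installed. The definition actually in effect can be confirmed the same way this run does a few entries below with systemctl cat:

	# Confirm the unit in effect after the swap and daemon-reload.
	sudo systemctl cat docker.service | grep '^ExecStart='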
	I0811 01:23:43.049362 1560222 start.go:267] post-start starting for "old-k8s-version-20210811011523-1387367" (driver="docker")
	I0811 01:23:43.049368 1560222 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 01:23:43.049423 1560222 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 01:23:43.049474 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:43.082501 1560222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50395 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210811011523-1387367/id_rsa Username:docker}
	I0811 01:23:43.168411 1560222 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 01:23:43.170987 1560222 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 01:23:43.171011 1560222 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 01:23:43.171022 1560222 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 01:23:43.171029 1560222 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 01:23:43.171040 1560222 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 01:23:43.171092 1560222 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 01:23:43.171186 1560222 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 01:23:43.171664 1560222 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 01:23:43.182571 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:23:43.202952 1560222 start.go:270] post-start completed in 153.575451ms
	I0811 01:23:43.203060 1560222 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:23:43.203125 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:43.234547 1560222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50395 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210811011523-1387367/id_rsa Username:docker}
	I0811 01:23:43.317505 1560222 fix.go:57] fixHost completed within 5.089978239s
	I0811 01:23:43.317530 1560222 start.go:80] releasing machines lock for "old-k8s-version-20210811011523-1387367", held for 5.09003944s
	I0811 01:23:43.317611 1560222 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210811011523-1387367
	I0811 01:23:43.349360 1560222 ssh_runner.go:149] Run: systemctl --version
	I0811 01:23:43.349412 1560222 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 01:23:43.349481 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:43.349413 1560222 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367
	I0811 01:23:43.390589 1560222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50395 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210811011523-1387367/id_rsa Username:docker}
	I0811 01:23:43.399337 1560222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50395 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/old-k8s-version-20210811011523-1387367/id_rsa Username:docker}
	I0811 01:23:43.693863 1560222 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 01:23:43.705465 1560222 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:23:43.715129 1560222 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 01:23:43.715195 1560222 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 01:23:43.725408 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 01:23:43.738106 1560222 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 01:23:43.827951 1560222 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 01:23:43.918878 1560222 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:23:43.928906 1560222 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 01:23:44.011866 1560222 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 01:23:44.021124 1560222 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:23:44.075734 1560222 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:23:44.130557 1560222 out.go:204] * Preparing Kubernetes v1.14.0 on Docker 20.10.7 ...
	I0811 01:23:44.130663 1560222 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210811011523-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:23:44.161732 1560222 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0811 01:23:44.164963 1560222 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:23:44.173633 1560222 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0811 01:23:44.173694 1560222 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:23:44.215044 1560222 docker.go:535] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0811 01:23:44.215069 1560222 docker.go:466] Images already preloaded, skipping extraction
	I0811 01:23:44.215122 1560222 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:23:44.256025 1560222 docker.go:535] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0811 01:23:44.256048 1560222 cache_images.go:74] Images are preloaded, skipping loading
	I0811 01:23:44.256110 1560222 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 01:23:44.564837 1560222 cni.go:93] Creating CNI manager for ""
	I0811 01:23:44.564862 1560222 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:23:44.564872 1560222 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 01:23:44.564885 1560222 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210811011523-1387367 NodeName:old-k8s-version-20210811011523-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 01:23:44.565038 1560222 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20210811011523-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210811011523-1387367
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.58.2:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 01:23:44.565123 1560222 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20210811011523-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210811011523-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
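The kubelet drop-in above uses the same empty-ExecStart trick to replace the packaged ExecStart with the v1.14.0 kubelet invocation for this node (note that --hostname-override and --node-ip match the node entry in the config). Applying such a drop-in is the standard systemd sequence:

	# Standard way to pick up a kubelet drop-in like the one above.
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet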
	I0811 01:23:44.565194 1560222 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0811 01:23:44.572704 1560222 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 01:23:44.572766 1560222 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 01:23:44.579001 1560222 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0811 01:23:44.590904 1560222 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 01:23:44.602719 1560222 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
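The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new; minikube later drives kubeadm against it itself, with its own flags and preflight handling, but a bare manual equivalent using the binary path from this run would look roughly like:

	# Hedged sketch only; minikube invokes kubeadm with additional options.
	sudo /var/lib/minikube/binaries/v1.14.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=all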
	I0811 01:23:44.615149 1560222 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0811 01:23:44.618007 1560222 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:23:44.626327 1560222 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367 for IP: 192.168.58.2
	I0811 01:23:44.626381 1560222 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 01:23:44.626399 1560222 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 01:23:44.626450 1560222 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.key
	I0811 01:23:44.626470 1560222 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/apiserver.key.cee25041
	I0811 01:23:44.626485 1560222 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/proxy-client.key
	I0811 01:23:44.626586 1560222 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 01:23:44.626625 1560222 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 01:23:44.626640 1560222 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 01:23:44.626665 1560222 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 01:23:44.626691 1560222 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 01:23:44.626721 1560222 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 01:23:44.626769 1560222 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:23:44.627863 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 01:23:44.644260 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 01:23:44.660910 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 01:23:44.678034 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 01:23:44.695064 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 01:23:44.711339 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 01:23:44.727473 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 01:23:44.743964 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 01:23:44.760875 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 01:23:44.777612 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 01:23:44.793435 1560222 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 01:23:44.809293 1560222 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 01:23:44.821564 1560222 ssh_runner.go:149] Run: openssl version
	I0811 01:23:44.827759 1560222 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 01:23:44.836679 1560222 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 01:23:44.839813 1560222 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 01:23:44.839875 1560222 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 01:23:44.844713 1560222 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 01:23:44.851235 1560222 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 01:23:44.858131 1560222 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 01:23:44.861184 1560222 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 01:23:44.861238 1560222 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 01:23:44.866160 1560222 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 01:23:44.872784 1560222 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 01:23:44.880211 1560222 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:23:44.883379 1560222 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:23:44.883485 1560222 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:23:44.888578 1560222 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
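Each of the three certificate installs above follows the same pattern: copy the PEM under /usr/share/ca-certificates, then link it into /etc/ssl/certs under its OpenSSL subject-hash name (the output of openssl x509 -hash -noout, e.g. b5213941 for minikubeCA.pem) so TLS clients that scan the system trust store can find it. Condensed into one step for a single certificate:

	# The copy / hash / link pattern above, condensed for one certificate.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"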
	I0811 01:23:44.895328 1560222 kubeadm.go:390] StartCluster: {Name:old-k8s-version-20210811011523-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210811011523-1387367 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks
:0}
	I0811 01:23:44.895496 1560222 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:23:44.935039 1560222 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 01:23:44.941984 1560222 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0811 01:23:44.942040 1560222 kubeadm.go:600] restartCluster start
	I0811 01:23:44.942114 1560222 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0811 01:23:44.948210 1560222 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:44.948934 1560222 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210811011523-1387367" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:23:44.949049 1560222 kubeconfig.go:128] "old-k8s-version-20210811011523-1387367" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig - will repair!
	I0811 01:23:44.949425 1560222 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:23:44.951556 1560222 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0811 01:23:44.958759 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:44.958832 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:44.968721 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:45.169069 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:45.169158 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:45.179529 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:45.369795 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:45.369879 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:45.380142 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:45.569458 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:45.569581 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:45.580111 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:45.769414 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:45.769506 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:45.780149 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:45.969327 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:45.969409 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:45.979907 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:46.169099 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:46.169224 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:46.179715 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:46.368877 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:46.368987 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:46.379628 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:46.568855 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:46.568933 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:46.579548 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:46.769769 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:46.769846 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:46.782276 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:46.969371 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:46.969442 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:46.982641 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:47.168822 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:47.168894 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:47.182875 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:47.369206 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:47.369322 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:47.402054 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:47.569355 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:47.569430 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:47.580294 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:47.769558 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:47.769638 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:47.780021 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:47.969608 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:47.969691 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:47.979976 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:47.979998 1560222 api_server.go:164] Checking apiserver status ...
	I0811 01:23:47.980041 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:23:47.990095 1560222 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:23:47.990124 1560222 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0811 01:23:47.990131 1560222 kubeadm.go:1032] stopping kube-system containers ...
	I0811 01:23:47.990180 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:23:48.033798 1560222 docker.go:367] Stopping containers: [eb9326e47205 1db1de607f4e eaf17d74a13a 4633ba1ade18 f6076d0c7a6a 9951b6b47c71 58e805b6d5b0 296ccdb613e6 40abcf2e25aa e6880c8db38b 8744c75e76c1 a26c27c96df8 0903dca9c321 b9ea5e360196]
	I0811 01:23:48.033871 1560222 ssh_runner.go:149] Run: docker stop eb9326e47205 1db1de607f4e eaf17d74a13a 4633ba1ade18 f6076d0c7a6a 9951b6b47c71 58e805b6d5b0 296ccdb613e6 40abcf2e25aa e6880c8db38b 8744c75e76c1 a26c27c96df8 0903dca9c321 b9ea5e360196
	I0811 01:23:48.077386 1560222 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0811 01:23:48.088123 1560222 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:23:48.094803 1560222 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5755 Aug 11 01:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5791 Aug 11 01:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5951 Aug 11 01:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Aug 11 01:16 /etc/kubernetes/scheduler.conf
	
	I0811 01:23:48.094864 1560222 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0811 01:23:48.101152 1560222 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0811 01:23:48.107301 1560222 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0811 01:23:48.113853 1560222 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0811 01:23:48.120856 1560222 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 01:23:48.127513 1560222 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0811 01:23:48.127541 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:23:48.677424 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:23:50.205360 1560222 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.527873512s)
	I0811 01:23:50.205390 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:23:50.446758 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:23:50.511095 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:23:50.576944 1560222 api_server.go:50] waiting for apiserver process to appear ...
	I0811 01:23:50.577004 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:23:51.091999 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:23:51.591679 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:23:52.092096 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:23:52.591528 1560222 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:23:52.622618 1560222 api_server.go:70] duration metric: took 2.045673392s to wait for apiserver process to appear ...
	I0811 01:23:52.622639 1560222 api_server.go:86] waiting for apiserver healthz status ...
	I0811 01:23:52.622650 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:23:57.623631 1560222 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0811 01:23:58.124499 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:24:03.125108 1560222 api_server.go:255] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0811 01:24:03.624113 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:24:03.697176 1560222 api_server.go:265] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0811 01:24:03.697199 1560222 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0811 01:24:04.125085 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:24:04.251830 1560222 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0811 01:24:04.251864 1560222 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0811 01:24:04.624430 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:24:04.677357 1560222 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0811 01:24:04.677385 1560222 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0811 01:24:05.125566 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:24:05.151184 1560222 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0811 01:24:05.151207 1560222 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0811 01:24:05.623754 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:24:05.640426 1560222 api_server.go:265] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	W0811 01:24:05.640451 1560222 api_server.go:101] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	I0811 01:24:06.123795 1560222 api_server.go:239] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 01:24:06.155285 1560222 api_server.go:265] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0811 01:24:06.190475 1560222 api_server.go:139] control plane version: v1.14.0
	I0811 01:24:06.190553 1560222 api_server.go:129] duration metric: took 13.567906784s to wait for apiserver health ...
	I0811 01:24:06.190576 1560222 cni.go:93] Creating CNI manager for ""
	I0811 01:24:06.190596 1560222 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:24:06.190627 1560222 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 01:24:06.206440 1560222 system_pods.go:59] 7 kube-system pods found
	I0811 01:24:06.206522 1560222 system_pods.go:61] "coredns-fb8b8dccf-nssnp" [c4306c39-fa41-11eb-8d58-0242c9ee8e97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0811 01:24:06.206541 1560222 system_pods.go:61] "etcd-old-k8s-version-20210811011523-1387367" [da63bd1e-fa41-11eb-8d58-0242c9ee8e97] Running
	I0811 01:24:06.206559 1560222 system_pods.go:61] "kube-apiserver-old-k8s-version-20210811011523-1387367" [df28147d-fa41-11eb-8d58-0242c9ee8e97] Running
	I0811 01:24:06.206591 1560222 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210811011523-1387367" [e18a645b-fa41-11eb-8d58-0242c9ee8e97] Running
	I0811 01:24:06.206615 1560222 system_pods.go:61] "kube-proxy-b2hgl" [c456fa3d-fa41-11eb-8d58-0242c9ee8e97] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0811 01:24:06.206632 1560222 system_pods.go:61] "kube-scheduler-old-k8s-version-20210811011523-1387367" [e485893a-fa41-11eb-8d58-0242c9ee8e97] Running
	I0811 01:24:06.206647 1560222 system_pods.go:61] "storage-provisioner" [c5c0f6bb-fa41-11eb-8d58-0242c9ee8e97] Running
	I0811 01:24:06.206661 1560222 system_pods.go:74] duration metric: took 16.014001ms to wait for pod list to return data ...
	I0811 01:24:06.206689 1560222 node_conditions.go:102] verifying NodePressure condition ...
	I0811 01:24:06.214985 1560222 node_conditions.go:122] node storage ephemeral capacity is 60796312Ki
	I0811 01:24:06.215017 1560222 node_conditions.go:123] node cpu capacity is 2
	I0811 01:24:06.215029 1560222 node_conditions.go:105] duration metric: took 8.320201ms to run NodePressure ...
	I0811 01:24:06.215046 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:24:08.059611 1560222 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.844547005s)
	I0811 01:24:08.059638 1560222 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0811 01:24:08.103030 1560222 retry.go:31] will retry after 276.165072ms: kubelet not initialised
	I0811 01:24:08.383683 1560222 retry.go:31] will retry after 540.190908ms: kubelet not initialised
	I0811 01:24:08.929889 1560222 retry.go:31] will retry after 655.06503ms: kubelet not initialised
	I0811 01:24:09.589452 1560222 retry.go:31] will retry after 791.196345ms: kubelet not initialised
	I0811 01:24:10.385933 1560222 retry.go:31] will retry after 1.170244332s: kubelet not initialised
	I0811 01:24:11.560594 1560222 retry.go:31] will retry after 2.253109428s: kubelet not initialised
	I0811 01:24:13.818220 1560222 retry.go:31] will retry after 1.610739793s: kubelet not initialised
	I0811 01:24:15.432958 1560222 retry.go:31] will retry after 2.804311738s: kubelet not initialised
	I0811 01:24:18.241529 1560222 retry.go:31] will retry after 3.824918958s: kubelet not initialised
	I0811 01:24:22.075318 1560222 retry.go:31] will retry after 7.69743562s: kubelet not initialised
	I0811 01:24:29.778040 1560222 retry.go:31] will retry after 14.635568968s: kubelet not initialised
	I0811 01:24:44.418098 1560222 kubeadm.go:746] kubelet initialised
	I0811 01:24:44.418118 1560222 kubeadm.go:747] duration metric: took 36.358473617s waiting for restarted kubelet to initialise ...
	I0811 01:24:44.418126 1560222 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 01:24:44.422799 1560222 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-5d28q" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.433464 1560222 pod_ready.go:92] pod "coredns-fb8b8dccf-5d28q" in "kube-system" namespace has status "Ready":"True"
	I0811 01:24:44.433490 1560222 pod_ready.go:81] duration metric: took 10.655037ms waiting for pod "coredns-fb8b8dccf-5d28q" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.433502 1560222 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-nssnp" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.440119 1560222 pod_ready.go:92] pod "coredns-fb8b8dccf-nssnp" in "kube-system" namespace has status "Ready":"True"
	I0811 01:24:44.440141 1560222 pod_ready.go:81] duration metric: took 6.630743ms waiting for pod "coredns-fb8b8dccf-nssnp" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.440151 1560222 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.444293 1560222 pod_ready.go:92] pod "etcd-old-k8s-version-20210811011523-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 01:24:44.444310 1560222 pod_ready.go:81] duration metric: took 4.151654ms waiting for pod "etcd-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.444320 1560222 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.448737 1560222 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210811011523-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 01:24:44.448757 1560222 pod_ready.go:81] duration metric: took 4.428821ms waiting for pod "kube-apiserver-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.448767 1560222 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.818381 1560222 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210811011523-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 01:24:44.818435 1560222 pod_ready.go:81] duration metric: took 369.659005ms waiting for pod "kube-controller-manager-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:44.818459 1560222 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b2hgl" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:45.217374 1560222 pod_ready.go:92] pod "kube-proxy-b2hgl" in "kube-system" namespace has status "Ready":"True"
	I0811 01:24:45.217395 1560222 pod_ready.go:81] duration metric: took 398.91946ms waiting for pod "kube-proxy-b2hgl" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:45.217416 1560222 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:45.616717 1560222 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210811011523-1387367" in "kube-system" namespace has status "Ready":"True"
	I0811 01:24:45.616745 1560222 pod_ready.go:81] duration metric: took 399.307083ms waiting for pod "kube-scheduler-old-k8s-version-20210811011523-1387367" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:45.616758 1560222 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace to be "Ready" ...
	I0811 01:24:48.022791 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:24:50.521774 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:24:52.523317 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:24:55.022454 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:24:57.522808 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:00.022177 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:02.022753 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:04.031641 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:06.522632 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:09.022613 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:11.522856 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:14.023250 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:16.521725 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:18.523852 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:21.023137 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:23.025285 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:25.026376 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:27.523086 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:29.561261 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:31.655237 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:34.022062 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:36.023176 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:38.023423 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:40.028035 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:42.522369 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:44.522573 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:46.524147 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:49.022115 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:51.025669 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:53.522240 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:55.522683 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:25:57.522939 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:00.022022 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:02.023570 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:04.522805 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:06.522925 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:08.523954 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:11.022402 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:13.023694 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:15.523322 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:18.024583 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:20.523699 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:23.023686 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:25.025087 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:27.523339 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:29.523691 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:32.022198 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:34.022892 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:36.024768 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:38.523398 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:41.023542 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:43.023675 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:45.523957 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:47.524708 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:50.022431 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:52.022524 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:54.026454 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:56.030944 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:26:58.523338 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:01.022960 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:03.522283 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:05.523084 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:08.030258 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:10.522519 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:12.524971 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:15.023093 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:17.026763 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:19.523921 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:22.118699 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:24.123188 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:26.522647 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:28.523889 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:30.524106 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:33.022304 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:35.522503 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:37.523310 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:40.022582 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:42.022620 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:44.030228 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:46.523004 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:49.022676 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:51.023072 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:53.541837 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:55.542935 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:27:58.022664 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:00.023485 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:02.522149 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:04.523057 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:07.022524 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:09.522337 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:11.523161 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:14.029580 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:16.523459 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:18.525061 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:21.022846 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:23.524103 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:26.022529 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:28.023187 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:30.033036 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:32.523077 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:34.530722 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:37.024295 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:39.522475 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:41.523051 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:44.022913 1560222 pod_ready.go:102] pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace has status "Ready":"False"
	I0811 01:28:46.017085 1560222 pod_ready.go:81] duration metric: took 4m0.400311217s waiting for pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace to be "Ready" ...
	E0811 01:28:46.017138 1560222 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-64wt2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0811 01:28:46.017171 1560222 pod_ready.go:38] duration metric: took 4m1.599019441s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 01:28:46.017216 1560222 kubeadm.go:604] restartCluster took 5m1.075159872s
	W0811 01:28:46.017387 1560222 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0811 01:28:46.017666 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0811 01:28:50.825047 1560222 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (4.80735702s)
	I0811 01:28:50.825110 1560222 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0811 01:28:50.839421 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:28:50.919146 1560222 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 01:28:50.931407 1560222 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 01:28:50.931523 1560222 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:28:50.939560 1560222 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 01:28:50.939618 1560222 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 01:28:52.098832 1560222 out.go:204]   - Generating certificates and keys ...
	I0811 01:28:56.405158 1560222 out.go:204]   - Booting up control plane ...
	W0811 01:32:56.430256 1560222 out.go:242] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	DOCKER_VERSION: 20.10.7
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	I0811 01:32:56.430349 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0811 01:32:56.581679 1560222 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0811 01:32:56.592491 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:32:56.633612 1560222 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 01:32:56.633683 1560222 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:32:56.641110 1560222 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 01:32:56.641174 1560222 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 01:32:57.515211 1560222 out.go:204]   - Generating certificates and keys ...
	I0811 01:33:00.213310 1560222 out.go:204]   - Booting up control plane ...
	I0811 01:37:00.249336 1560222 kubeadm.go:392] StartCluster complete in 13m15.354012512s
	I0811 01:37:00.249483 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0811 01:37:00.341747 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.341772 1560222 logs.go:272] No container was found matching "kube-apiserver"
	I0811 01:37:00.341821 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0811 01:37:00.397885 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.397920 1560222 logs.go:272] No container was found matching "etcd"
	I0811 01:37:00.397977 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0811 01:37:00.460187 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.460205 1560222 logs.go:272] No container was found matching "coredns"
	I0811 01:37:00.460259 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0811 01:37:00.504486 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.504508 1560222 logs.go:272] No container was found matching "kube-scheduler"
	I0811 01:37:00.504560 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0811 01:37:00.544873 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.544892 1560222 logs.go:272] No container was found matching "kube-proxy"
	I0811 01:37:00.544945 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0811 01:37:00.590947 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.590965 1560222 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0811 01:37:00.591022 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0811 01:37:00.645461 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.645486 1560222 logs.go:272] No container was found matching "storage-provisioner"
	I0811 01:37:00.645543 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0811 01:37:00.685049 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.685068 1560222 logs.go:272] No container was found matching "kube-controller-manager"
	I0811 01:37:00.685079 1560222 logs.go:123] Gathering logs for kubelet ...
	I0811 01:37:00.685090 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0811 01:37:00.707041 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:52 old-k8s-version-20210811011523-1387367 kubelet[69013]: F0811 01:36:52.633715   69013 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.719352 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:54 old-k8s-version-20210811011523-1387367 kubelet[69218]: F0811 01:36:54.420969   69218 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.731093 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:56 old-k8s-version-20210811011523-1387367 kubelet[69428]: F0811 01:36:56.035361   69428 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.742824 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:57 old-k8s-version-20210811011523-1387367 kubelet[69625]: F0811 01:36:57.459475   69625 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.754569 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:58 old-k8s-version-20210811011523-1387367 kubelet[69813]: F0811 01:36:58.939561   69813 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.766591 1560222 logs.go:138] Found kubelet problem: Aug 11 01:37:00 old-k8s-version-20210811011523-1387367 kubelet[70004]: F0811 01:37:00.470711   70004 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:00.766769 1560222 logs.go:123] Gathering logs for dmesg ...
	I0811 01:37:00.766787 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0811 01:37:00.783797 1560222 logs.go:123] Gathering logs for describe nodes ...
	I0811 01:37:00.783825 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0811 01:37:00.854847 1560222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0811 01:37:00.854869 1560222 logs.go:123] Gathering logs for Docker ...
	I0811 01:37:00.854879 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0811 01:37:00.874412 1560222 logs.go:123] Gathering logs for container status ...
	I0811 01:37:00.874439 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0811 01:37:01.911961 1560222 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.037503905s)
	W0811 01:37:01.912084 1560222 out.go:371] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	DOCKER_VERSION: 20.10.7
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0811 01:37:01.912109 1560222 out.go:242] * 
	* 
	W0811 01:37:01.912272 1560222 out.go:242] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	DOCKER_VERSION: 20.10.7
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0811 01:37:01.912293 1560222 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0811 01:37:01.914831 1560222 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                              │
	│                                                                                                                                                            │
	│    * Please attach the following file to the GitHub issue:                                                                                                 │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 01:37:01.918160 1560222 out.go:177] X Problems detected in kubelet:
	I0811 01:37:01.920024 1560222 out.go:177]   Aug 11 01:36:52 old-k8s-version-20210811011523-1387367 kubelet[69013]: F0811 01:36:52.633715   69013 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:01.922272 1560222 out.go:177]   Aug 11 01:36:54 old-k8s-version-20210811011523-1387367 kubelet[69218]: F0811 01:36:54.420969   69218 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:01.924783 1560222 out.go:177]   Aug 11 01:36:56 old-k8s-version-20210811011523-1387367 kubelet[69428]: F0811 01:36:56.035361   69428 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:01.929160 1560222 out.go:177] 
	W0811 01:37:01.929423 1560222 out.go:242] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	DOCKER_VERSION: 20.10.7
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0811 01:37:01.929871 1560222 out.go:242] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0811 01:37:01.929965 1560222 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0811 01:37:01.932910 1560222 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:232: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-20210811011523-1387367 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0": exit status 109
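The kubelet entries captured above point at the likely root cause of this failure: every kubelet restart dies while writing hugetlb.64kB.limit_in_bytes under /sys/fs/cgroup/hugetlb/kubepods/... ("permission denied"), so the ContainerManager never initializes, no control-plane containers are created, and kubeadm's wait-control-plane phase times out, producing exit status 109. The commands below are a minimal sketch of how one might confirm the cgroup setup inside the node container and retry with the kubelet cgroup driver the log itself suggests; they were not run as part of this report, the container name comes from the docker inspect output below, and the trimmed start flags and the outcome of the retry are assumptions.

	# check which cgroup driver the Docker daemon inside the node reports (hypothetical check, not part of this run)
	docker exec old-k8s-version-20210811011523-1387367 docker info --format '{{.CgroupDriver}}'
	# look at the hugetlb controller path the kubelet failed to write
	docker exec old-k8s-version-20210811011523-1387367 ls -l /sys/fs/cgroup/hugetlb/kubepods/besteffort/
	# retry the failed start with the suggested kubelet cgroup driver (other flags trimmed for brevity)
	out/minikube-linux-arm64 start -p old-k8s-version-20210811011523-1387367 --memory=2200 --driver=docker --container-runtime=docker --kubernetes-version=v1.14.0 --extra-config=kubelet.cgroup-driver=systemd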
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210811011523-1387367
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210811011523-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063",
	        "Created": "2021-08-11T01:15:25.302959325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1560408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-11T01:23:38.623423696Z",
	            "FinishedAt": "2021-08-11T01:23:37.413617456Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/hostname",
	        "HostsPath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/hosts",
	        "LogPath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063-json.log",
	        "Name": "/old-k8s-version-20210811011523-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210811011523-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210811011523-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210811011523-1387367",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210811011523-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210811011523-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210811011523-1387367",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210811011523-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2f03e3515d97b5a3e58e944c50f46d22ccfe797d00d1d40e45726faca300bb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50395"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50391"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50393"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50392"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f2f03e3515d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210811011523-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bba6c5a68bfe",
	                        "old-k8s-version-20210811011523-1387367"
	                    ],
	                    "NetworkID": "f9263d0eb8195b3d75e38102721f923423021af8bf484b8d2a95b3aadb987266",
	                    "EndpointID": "24cd56ecd9abff4711f1582fd2589536026e1fc6ee24a461fe2bbae8bb5905b8",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
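The inspect dump above confirms the node container itself is healthy (State.Status "running", restart count 0, ports 22/2376/5000/8443/32443 published on 127.0.0.1); only the Kubernetes processes inside it failed to come up. When just a few of these fields matter, docker inspect's --format template can narrow the output; the commands below are illustrative and were not part of this run.

	# extract just the container state and start time instead of the full JSON (illustrative, not part of this run)
	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' old-k8s-version-20210811011523-1387367
	# show the published port map, e.g. the apiserver's 8443 binding on 127.0.0.1
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-20210811011523-1387367
	docker port old-k8s-version-20210811011523-1387367 8443/tcp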
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367: exit status 2 (334.598553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
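This status probe only asks for the Host field, which is why it prints "Running" even though the apiserver inside the node never started, and the helper accordingly treats the non-zero exit as potentially acceptable. A fuller cross-check, sketched below and not part of this run, would be to print the complete status and repeat the apiserver container lookup the start path already performed (it found no k8s_kube-apiserver container).

	# full component status for the profile (illustrative, not part of this run)
	out/minikube-linux-arm64 status -p old-k8s-version-20210811011523-1387367
	# the same apiserver lookup the start path ran above, with container status added
	docker exec old-k8s-version-20210811011523-1387367 docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'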
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-20210811011523-1387367 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p old-k8s-version-20210811011523-1387367 logs -n 25: exit status 110 (2.973479388s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | cert-options-20210811012019-1387367       | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:21:03 UTC | Wed, 11 Aug 2021 01:21:05 UTC |
	|         | cert-options-20210811012019-1387367               |                                           |         |         |                               |                               |
	| start   | -p                                                | running-upgrade-20210811012105-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:22:06 UTC | Wed, 11 Aug 2021 01:22:56 UTC |
	|         | running-upgrade-20210811012105-1387367            |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                           |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                           |         |         |                               |                               |
	| delete  | -p                                                | running-upgrade-20210811012105-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:22:56 UTC | Wed, 11 Aug 2021 01:22:59 UTC |
	|         | running-upgrade-20210811012105-1387367            |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210811011523-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:23:25 UTC | Wed, 11 Aug 2021 01:23:26 UTC |
	|         | old-k8s-version-20210811011523-1387367            |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210811011523-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:23:26 UTC | Wed, 11 Aug 2021 01:23:37 UTC |
	|         | old-k8s-version-20210811011523-1387367            |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210811011523-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:23:37 UTC | Wed, 11 Aug 2021 01:23:37 UTC |
	|         | old-k8s-version-20210811011523-1387367            |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| delete  | -p                                                | missing-upgrade-20210811012259-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:24:01 UTC | Wed, 11 Aug 2021 01:24:03 UTC |
	|         | missing-upgrade-20210811012259-1387367            |                                           |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210811012403-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:24:03 UTC | Wed, 11 Aug 2021 01:25:02 UTC |
	|         | kubernetes-upgrade-20210811012403-1387367         |                                           |         |         |                               |                               |
	|         | --memory=2200                                     |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                           |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                           |         |         |                               |                               |
	| stop    | -p                                                | kubernetes-upgrade-20210811012403-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:25:02 UTC | Wed, 11 Aug 2021 01:25:13 UTC |
	|         | kubernetes-upgrade-20210811012403-1387367         |                                           |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210811012403-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:25:13 UTC | Wed, 11 Aug 2021 01:26:00 UTC |
	|         | kubernetes-upgrade-20210811012403-1387367         |                                           |         |         |                               |                               |
	|         | --memory=2200                                     |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                           |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                           |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210811012403-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:26:01 UTC | Wed, 11 Aug 2021 01:26:17 UTC |
	|         | kubernetes-upgrade-20210811012403-1387367         |                                           |         |         |                               |                               |
	|         | --memory=2200                                     |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                           |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                           |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210811012403-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:26:17 UTC | Wed, 11 Aug 2021 01:26:20 UTC |
	|         | kubernetes-upgrade-20210811012403-1387367         |                                           |         |         |                               |                               |
	| start   | -p                                                | stopped-upgrade-20210811012620-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:27:09 UTC | Wed, 11 Aug 2021 01:27:47 UTC |
	|         | stopped-upgrade-20210811012620-1387367            |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | -v=1 --driver=docker                              |                                           |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                           |         |         |                               |                               |
	| logs    | -p                                                | stopped-upgrade-20210811012620-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:27:48 UTC | Wed, 11 Aug 2021 01:27:49 UTC |
	|         | stopped-upgrade-20210811012620-1387367            |                                           |         |         |                               |                               |
	| delete  | -p                                                | stopped-upgrade-20210811012620-1387367    | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:27:49 UTC | Wed, 11 Aug 2021 01:27:51 UTC |
	|         | stopped-upgrade-20210811012620-1387367            |                                           |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:27:51 UTC | Wed, 11 Aug 2021 01:29:11 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:21 UTC | Wed, 11 Aug 2021 01:29:22 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:22 UTC | Wed, 11 Aug 2021 01:29:34 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:34 UTC | Wed, 11 Aug 2021 01:29:34 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:34 UTC | Wed, 11 Aug 2021 01:35:25 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:43 UTC | Wed, 11 Aug 2021 01:35:43 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| pause   | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:43 UTC | Wed, 11 Aug 2021 01:35:44 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| unpause | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:45 UTC | Wed, 11 Aug 2021 01:35:46 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:47 UTC | Wed, 11 Aug 2021 01:35:49 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210811012751-1387367         | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:50 UTC | Wed, 11 Aug 2021 01:35:50 UTC |
	|         | no-preload-20210811012751-1387367                 |                                           |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 01:35:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 01:35:50.316442 1653562 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:35:50.316644 1653562 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:35:50.316654 1653562 out.go:311] Setting ErrFile to fd 2...
	I0811 01:35:50.316659 1653562 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:35:50.316797 1653562 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:35:50.317114 1653562 out.go:305] Setting JSON to false
	I0811 01:35:50.318282 1653562 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":40697,"bootTime":1628605053,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 01:35:50.318370 1653562 start.go:121] virtualization:  
	I0811 01:35:50.321516 1653562 out.go:177] * [embed-certs-20210811013550-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 01:35:50.324680 1653562 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 01:35:50.323087 1653562 notify.go:169] Checking for updates...
	I0811 01:35:50.326778 1653562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:35:50.333083 1653562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 01:35:50.335315 1653562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 01:35:50.336119 1653562 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 01:35:50.387562 1653562 docker.go:132] docker version: linux-20.10.8
	I0811 01:35:50.387653 1653562 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:35:50.522991 1653562 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:35:50.451841919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:35:50.523449 1653562 docker.go:244] overlay module found
	I0811 01:35:50.530277 1653562 out.go:177] * Using the docker driver based on user configuration
	I0811 01:35:50.530304 1653562 start.go:278] selected driver: docker
	I0811 01:35:50.530312 1653562 start.go:751] validating driver "docker" against <nil>
	I0811 01:35:50.530330 1653562 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 01:35:50.530377 1653562 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:35:50.530388 1653562 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 01:35:50.533053 1653562 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 01:35:50.533576 1653562 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:35:50.653353 1653562 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:35:50.565711898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:35:50.653464 1653562 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 01:35:50.653643 1653562 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 01:35:50.653668 1653562 cni.go:93] Creating CNI manager for ""
	I0811 01:35:50.653676 1653562 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:35:50.653681 1653562 start_flags.go:277] config:
	{Name:embed-certs-20210811013550-1387367 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210811013550-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:35:50.656505 1653562 out.go:177] * Starting control plane node embed-certs-20210811013550-1387367 in cluster embed-certs-20210811013550-1387367
	I0811 01:35:50.656549 1653562 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 01:35:50.658860 1653562 out.go:177] * Pulling base image ...
	I0811 01:35:50.658903 1653562 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:35:50.658957 1653562 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 01:35:50.658978 1653562 cache.go:56] Caching tarball of preloaded images
	I0811 01:35:50.659083 1653562 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 01:35:50.659676 1653562 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 01:35:50.659702 1653562 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 01:35:50.659820 1653562 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/config.json ...
	I0811 01:35:50.659853 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/config.json: {Name:mkc966261f416d1827d3e1f1872383bed3e9a77d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:35:50.739606 1653562 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 01:35:50.739636 1653562 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 01:35:50.739651 1653562 cache.go:205] Successfully downloaded all kic artifacts
	I0811 01:35:50.739692 1653562 start.go:313] acquiring machines lock for embed-certs-20210811013550-1387367: {Name:mkfef6984eb6ed3f07d6d1ae8619edefa9f177b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 01:35:50.739817 1653562 start.go:317] acquired machines lock for "embed-certs-20210811013550-1387367" in 103.031µs
	I0811 01:35:50.739855 1653562 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210811013550-1387367 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210811013550-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 01:35:50.739957 1653562 start.go:126] createHost starting for "" (driver="docker")
	I0811 01:35:50.743031 1653562 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 01:35:50.743316 1653562 start.go:160] libmachine.API.Create for "embed-certs-20210811013550-1387367" (driver="docker")
	I0811 01:35:50.743355 1653562 client.go:168] LocalClient.Create starting
	I0811 01:35:50.743437 1653562 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 01:35:50.743471 1653562 main.go:130] libmachine: Decoding PEM data...
	I0811 01:35:50.743491 1653562 main.go:130] libmachine: Parsing certificate...
	I0811 01:35:50.743612 1653562 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 01:35:50.743633 1653562 main.go:130] libmachine: Decoding PEM data...
	I0811 01:35:50.743649 1653562 main.go:130] libmachine: Parsing certificate...
	I0811 01:35:50.744034 1653562 cli_runner.go:115] Run: docker network inspect embed-certs-20210811013550-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 01:35:50.794054 1653562 cli_runner.go:162] docker network inspect embed-certs-20210811013550-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 01:35:50.794133 1653562 network_create.go:255] running [docker network inspect embed-certs-20210811013550-1387367] to gather additional debugging logs...
	I0811 01:35:50.794156 1653562 cli_runner.go:115] Run: docker network inspect embed-certs-20210811013550-1387367
	W0811 01:35:50.837429 1653562 cli_runner.go:162] docker network inspect embed-certs-20210811013550-1387367 returned with exit code 1
	I0811 01:35:50.837461 1653562 network_create.go:258] error running [docker network inspect embed-certs-20210811013550-1387367]: docker network inspect embed-certs-20210811013550-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210811013550-1387367
	I0811 01:35:50.837477 1653562 network_create.go:260] output of [docker network inspect embed-certs-20210811013550-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210811013550-1387367
	
	** /stderr **
	I0811 01:35:50.837537 1653562 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:35:50.890345 1653562 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40005b5208] misses:0}
	I0811 01:35:50.890400 1653562 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 01:35:50.890420 1653562 network_create.go:106] attempt to create docker network embed-certs-20210811013550-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 01:35:50.890498 1653562 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210811013550-1387367
	I0811 01:35:50.986626 1653562 network_create.go:90] docker network embed-certs-20210811013550-1387367 192.168.49.0/24 created
	I0811 01:35:50.986654 1653562 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20210811013550-1387367" container
	I0811 01:35:50.986719 1653562 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 01:35:51.037185 1653562 cli_runner.go:115] Run: docker volume create embed-certs-20210811013550-1387367 --label name.minikube.sigs.k8s.io=embed-certs-20210811013550-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 01:35:51.084166 1653562 oci.go:102] Successfully created a docker volume embed-certs-20210811013550-1387367
	I0811 01:35:51.084252 1653562 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210811013550-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210811013550-1387367 --entrypoint /usr/bin/test -v embed-certs-20210811013550-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 01:35:51.746718 1653562 oci.go:106] Successfully prepared a docker volume embed-certs-20210811013550-1387367
	W0811 01:35:51.746782 1653562 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 01:35:51.746793 1653562 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 01:35:51.746861 1653562 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 01:35:51.747070 1653562 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:35:51.747093 1653562 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 01:35:51.747137 1653562 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210811013550-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 01:35:51.882475 1653562 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210811013550-1387367 --name embed-certs-20210811013550-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210811013550-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210811013550-1387367 --network embed-certs-20210811013550-1387367 --ip 192.168.49.2 --volume embed-certs-20210811013550-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 01:35:52.551033 1653562 cli_runner.go:115] Run: docker container inspect embed-certs-20210811013550-1387367 --format={{.State.Running}}
	I0811 01:35:52.611725 1653562 cli_runner.go:115] Run: docker container inspect embed-certs-20210811013550-1387367 --format={{.State.Status}}
	I0811 01:35:52.676118 1653562 cli_runner.go:115] Run: docker exec embed-certs-20210811013550-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 01:35:52.830297 1653562 oci.go:278] the created container "embed-certs-20210811013550-1387367" has a running status.
	I0811 01:35:52.830323 1653562 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa...
	I0811 01:35:53.625855 1653562 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 01:35:53.778329 1653562 cli_runner.go:115] Run: docker container inspect embed-certs-20210811013550-1387367 --format={{.State.Status}}
	I0811 01:35:53.838237 1653562 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 01:35:53.838261 1653562 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210811013550-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 01:36:05.292492 1653562 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210811013550-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (13.545319589s)
	I0811 01:36:05.292523 1653562 kic.go:188] duration metric: took 13.545428 seconds to extract preloaded images to volume
	I0811 01:36:05.292604 1653562 cli_runner.go:115] Run: docker container inspect embed-certs-20210811013550-1387367 --format={{.State.Status}}
	I0811 01:36:05.328362 1653562 machine.go:88] provisioning docker machine ...
	I0811 01:36:05.328399 1653562 ubuntu.go:169] provisioning hostname "embed-certs-20210811013550-1387367"
	I0811 01:36:05.328472 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:05.364858 1653562 main.go:130] libmachine: Using SSH client type: native
	I0811 01:36:05.365089 1653562 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50440 <nil> <nil>}
	I0811 01:36:05.365110 1653562 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210811013550-1387367 && echo "embed-certs-20210811013550-1387367" | sudo tee /etc/hostname
	I0811 01:36:05.489670 1653562 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210811013550-1387367
	
	I0811 01:36:05.489752 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:05.523504 1653562 main.go:130] libmachine: Using SSH client type: native
	I0811 01:36:05.523683 1653562 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50440 <nil> <nil>}
	I0811 01:36:05.523720 1653562 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210811013550-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210811013550-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210811013550-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 01:36:05.648565 1653562 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 01:36:05.648656 1653562 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 01:36:05.648702 1653562 ubuntu.go:177] setting up certificates
	I0811 01:36:05.648726 1653562 provision.go:83] configureAuth start
	I0811 01:36:05.648812 1653562 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210811013550-1387367
	I0811 01:36:05.688980 1653562 provision.go:137] copyHostCerts
	I0811 01:36:05.689254 1653562 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 01:36:05.689273 1653562 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 01:36:05.689343 1653562 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 01:36:05.689418 1653562 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 01:36:05.689424 1653562 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 01:36:05.689446 1653562 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 01:36:05.689494 1653562 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 01:36:05.689499 1653562 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 01:36:05.689518 1653562 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 01:36:05.689581 1653562 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210811013550-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210811013550-1387367]
	I0811 01:36:06.004151 1653562 provision.go:171] copyRemoteCerts
	I0811 01:36:06.004222 1653562 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 01:36:06.004269 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:06.050680 1653562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50440 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa Username:docker}
	I0811 01:36:06.149005 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 01:36:06.181398 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0811 01:36:06.213439 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 01:36:06.241449 1653562 provision.go:86] duration metric: configureAuth took 592.698701ms
	I0811 01:36:06.241475 1653562 ubuntu.go:193] setting minikube options for container-runtime
	I0811 01:36:06.241736 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:06.300812 1653562 main.go:130] libmachine: Using SSH client type: native
	I0811 01:36:06.301059 1653562 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50440 <nil> <nil>}
	I0811 01:36:06.301077 1653562 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 01:36:06.437432 1653562 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 01:36:06.437456 1653562 ubuntu.go:71] root file system type: overlay
	I0811 01:36:06.437624 1653562 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 01:36:06.437691 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:06.472411 1653562 main.go:130] libmachine: Using SSH client type: native
	I0811 01:36:06.472593 1653562 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50440 <nil> <nil>}
	I0811 01:36:06.472696 1653562 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 01:36:06.598910 1653562 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 01:36:06.599003 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:06.634895 1653562 main.go:130] libmachine: Using SSH client type: native
	I0811 01:36:06.635068 1653562 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50440 <nil> <nil>}
	I0811 01:36:06.635093 1653562 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 01:36:07.635971 1653562 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 01:36:06.596514881 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 01:36:07.636001 1653562 machine.go:91] provisioned docker machine in 2.307614878s
	I0811 01:36:07.636011 1653562 client.go:171] LocalClient.Create took 16.892646931s
	I0811 01:36:07.636020 1653562 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20210811013550-1387367" took 16.892705301s
	I0811 01:36:07.636028 1653562 start.go:267] post-start starting for "embed-certs-20210811013550-1387367" (driver="docker")
	I0811 01:36:07.636034 1653562 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 01:36:07.636092 1653562 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 01:36:07.636154 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:07.687450 1653562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50440 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa Username:docker}
	I0811 01:36:07.792442 1653562 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 01:36:07.795397 1653562 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 01:36:07.795423 1653562 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 01:36:07.795434 1653562 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 01:36:07.795441 1653562 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 01:36:07.795450 1653562 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 01:36:07.795505 1653562 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 01:36:07.795587 1653562 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 01:36:07.795681 1653562 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 01:36:07.802784 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:36:07.823056 1653562 start.go:270] post-start completed in 187.011728ms
	I0811 01:36:07.823476 1653562 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210811013550-1387367
	I0811 01:36:07.858524 1653562 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/config.json ...
	I0811 01:36:07.858806 1653562 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:36:07.858856 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:07.891308 1653562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50440 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa Username:docker}
	I0811 01:36:07.973143 1653562 start.go:129] duration metric: createHost completed in 17.233168251s
	I0811 01:36:07.973168 1653562 start.go:80] releasing machines lock for "embed-certs-20210811013550-1387367", held for 17.233335266s
	I0811 01:36:07.973259 1653562 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210811013550-1387367
	I0811 01:36:08.005557 1653562 ssh_runner.go:149] Run: systemctl --version
	I0811 01:36:08.005609 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:08.005612 1653562 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 01:36:08.005669 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:08.069898 1653562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50440 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa Username:docker}
	I0811 01:36:08.070132 1653562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50440 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa Username:docker}
	I0811 01:36:08.327428 1653562 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 01:36:08.336898 1653562 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:36:08.346784 1653562 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 01:36:08.346852 1653562 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 01:36:08.356665 1653562 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 01:36:08.369412 1653562 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 01:36:08.459542 1653562 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 01:36:08.544429 1653562 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:36:08.553785 1653562 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 01:36:08.654852 1653562 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 01:36:08.667843 1653562 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:36:08.730613 1653562 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:36:08.810820 1653562 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 01:36:08.810938 1653562 cli_runner.go:115] Run: docker network inspect embed-certs-20210811013550-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:36:08.868837 1653562 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 01:36:08.872335 1653562 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
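The /etc/hosts update above works by filtering out any existing host.minikube.internal line, appending the desired mapping, and copying a temp file over /etc/hosts so the file is never left truncated. A minimal Go sketch of that upsert, assuming a hypothetical upsertHostsEntry helper and an example path rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps the given hostname, mirroring the grep -v / echo / cp pipeline above.
// Writing to a temp file and then replacing the original avoids leaving the
// file half-written mid-edit.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Example path only; the real flow edits /etc/hosts over SSH with sudo.
	if err := upsertHostsEntry("/tmp/hosts-example", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}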
	I0811 01:36:08.882154 1653562 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:36:08.882225 1653562 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:36:08.940316 1653562 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:36:08.940343 1653562 docker.go:466] Images already preloaded, skipping extraction
	I0811 01:36:08.940400 1653562 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:36:09.000182 1653562 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:36:09.000207 1653562 cache_images.go:74] Images are preloaded, skipping loading
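The "Images are preloaded, skipping loading" decision above boils down to listing the repository:tag pairs the Docker daemon already has and checking them against the expected set. A small sketch of that comparison, with a made-up preloadedImages helper and a shortened want list (not minikube's cache_images API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadedImages lists repository:tag pairs known to the local Docker daemon,
// using the same `docker images --format` call that appears in the log.
func preloadedImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			have[line] = true
		}
	}
	return have, nil
}

func main() {
	// Abbreviated expected set; the real check covers every image in the preload.
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.21.3",
		"k8s.gcr.io/etcd:3.4.13-0",
	}
	have, err := preloadedImages()
	if err != nil {
		fmt.Println("listing images failed:", err)
		return
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing, would extract preload tarball:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}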
	I0811 01:36:09.000264 1653562 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 01:36:09.136765 1653562 cni.go:93] Creating CNI manager for ""
	I0811 01:36:09.136793 1653562 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:36:09.136807 1653562 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 01:36:09.136853 1653562 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210811013550-1387367 NodeName:embed-certs-20210811013550-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 01:36:09.137002 1653562 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20210811013550-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 01:36:09.137094 1653562 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20210811013550-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210811013550-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
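Both the kubeadm.yaml dump and the kubelet unit above are rendered by substituting values from the cluster config into templates. A minimal sketch of that pattern with text/template; the clusterParams struct and the trimmed template here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values substituted into the template.
// The fields are illustrative, not minikube's real config struct.
type clusterParams struct {
	NodeName string
	NodeIP   string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	p := clusterParams{
		NodeName: "embed-certs-20210811013550-1387367",
		NodeIP:   "192.168.49.2",
	}
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	// Render to stdout; the real flow scps the result to /var/tmp/minikube/kubeadm.yaml.new.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}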
	I0811 01:36:09.137160 1653562 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 01:36:09.149993 1653562 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 01:36:09.150069 1653562 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 01:36:09.161331 1653562 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
	I0811 01:36:09.182972 1653562 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 01:36:09.207945 1653562 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I0811 01:36:09.229341 1653562 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 01:36:09.233390 1653562 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:36:09.244657 1653562 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367 for IP: 192.168.49.2
	I0811 01:36:09.244716 1653562 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 01:36:09.244737 1653562 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 01:36:09.244794 1653562 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/client.key
	I0811 01:36:09.244803 1653562 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/client.crt with IP's: []
	I0811 01:36:10.266596 1653562 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/client.crt ...
	I0811 01:36:10.266665 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/client.crt: {Name:mkebec6cfb47bd399809ecd7bb2d45c4ab2923b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:36:10.266914 1653562 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/client.key ...
	I0811 01:36:10.266952 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/client.key: {Name:mk972e147c6096ed5e8e85e448afb3d8a5a31c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:36:10.267093 1653562 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.key.dd3b5fb2
	I0811 01:36:10.267127 1653562 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 01:36:10.861789 1653562 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.crt.dd3b5fb2 ...
	I0811 01:36:10.861858 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.crt.dd3b5fb2: {Name:mkd4658e1cfe1a4c4d0b379c2a416af27c1dc41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:36:10.862095 1653562 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.key.dd3b5fb2 ...
	I0811 01:36:10.862130 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.key.dd3b5fb2: {Name:mk604326a6ed9d5c5e8f022fa1a1832a1d302ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:36:10.862256 1653562 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.crt
	I0811 01:36:10.862342 1653562 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.key
	I0811 01:36:10.862411 1653562 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.key
	I0811 01:36:10.862437 1653562 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.crt with IP's: []
	I0811 01:36:11.366506 1653562 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.crt ...
	I0811 01:36:11.366542 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.crt: {Name:mkd36927dc37a887492775018a1673be44ca0185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:36:11.366792 1653562 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.key ...
	I0811 01:36:11.366810 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.key: {Name:mk075a391d8c788f1ab975e38794ccc592925a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
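The apiserver certificate generated above carries a fixed set of IP SANs (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A compact crypto/x509 sketch of issuing a certificate with those SANs; it is self-signed here for brevity, whereas the real flow signs with the minikubeCA key, and the key size and validity period are arbitrary choices:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a certificate carrying the same IP SANs seen in the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}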
	I0811 01:36:11.366993 1653562 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 01:36:11.367035 1653562 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 01:36:11.367050 1653562 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 01:36:11.367079 1653562 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 01:36:11.367102 1653562 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 01:36:11.367129 1653562 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 01:36:11.367179 1653562 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:36:11.368289 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 01:36:11.385731 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 01:36:11.403830 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 01:36:11.424222 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/embed-certs-20210811013550-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 01:36:11.440979 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 01:36:11.457970 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 01:36:11.475022 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 01:36:11.491999 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 01:36:11.509438 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 01:36:11.526350 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 01:36:11.542863 1653562 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 01:36:11.559081 1653562 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 01:36:11.571032 1653562 ssh_runner.go:149] Run: openssl version
	I0811 01:36:11.575890 1653562 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 01:36:11.583153 1653562 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:36:11.586875 1653562 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:36:11.586954 1653562 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:36:11.592270 1653562 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 01:36:11.601723 1653562 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 01:36:11.608773 1653562 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 01:36:11.612097 1653562 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 01:36:11.612164 1653562 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 01:36:11.617376 1653562 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 01:36:11.624583 1653562 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 01:36:11.631540 1653562 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 01:36:11.634805 1653562 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 01:36:11.634857 1653562 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 01:36:11.641598 1653562 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
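The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs is reachable through a symlink named after its subject hash plus a .0 suffix. A sketch of the same step driven from Go, using a hypothetical installCA helper and shelling out to openssl exactly as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA reproduces the hash-and-symlink step from the log: compute the
// OpenSSL subject hash and point /etc/ssl/certs/<hash>.0 at the PEM so TLS
// clients that scan that directory can find the CA.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Skip if the link already exists, like the shell's `test -L || ln -fs`.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}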
	I0811 01:36:11.648814 1653562 kubeadm.go:390] StartCluster: {Name:embed-certs-20210811013550-1387367 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210811013550-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:36:11.648958 1653562 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:36:11.699181 1653562 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 01:36:11.709744 1653562 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 01:36:11.717609 1653562 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 01:36:11.717720 1653562 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:36:11.727398 1653562 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 01:36:11.727482 1653562 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 01:36:12.748127 1653562 out.go:204]   - Generating certificates and keys ...
	I0811 01:36:19.902479 1653562 out.go:204]   - Booting up control plane ...
	I0811 01:36:37.468788 1653562 out.go:204]   - Configuring RBAC rules ...
	I0811 01:36:38.109656 1653562 cni.go:93] Creating CNI manager for ""
	I0811 01:36:38.109720 1653562 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:36:38.109757 1653562 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 01:36:38.109831 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:38.109865 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=embed-certs-20210811013550-1387367 minikube.k8s.io/updated_at=2021_08_11T01_36_38_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:38.762263 1653562 ops.go:34] apiserver oom_adj: -16
	I0811 01:36:38.762363 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:39.346085 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:39.846182 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:40.346009 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:40.846158 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:41.345575 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:41.845625 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:42.346064 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:42.845606 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:43.346158 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:43.845616 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:44.346302 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:44.845601 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:45.345832 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:45.846051 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:46.345775 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:46.845655 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:47.346181 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:47.846171 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:48.345911 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:48.846520 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:49.345810 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:49.846155 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:50.346357 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:50.845914 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:51.345624 1653562 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:36:51.552161 1653562 kubeadm.go:985] duration metric: took 13.442379394s to wait for elevateKubeSystemPrivileges.
	I0811 01:36:51.552184 1653562 kubeadm.go:392] StartCluster complete in 39.903378826s
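The run of `kubectl get sa default` calls above is a plain poll-until-success loop: retry on a fixed cadence until the default service account exists or a deadline passes. A generic sketch of that wait, where the 500ms interval matches the spacing visible in the timestamps and the waitForDefaultSA name is made up:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` every 500ms until the
// command succeeds or the deadline passes, mirroring the retry cadence above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}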
	I0811 01:36:51.552200 1653562 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:36:51.552283 1653562 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:36:51.554067 1653562 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:36:52.149876 1653562 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210811013550-1387367" rescaled to 1
	I0811 01:36:52.149972 1653562 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 01:36:52.154315 1653562 out.go:177] * Verifying Kubernetes components...
	I0811 01:36:52.150032 1653562 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 01:36:52.150253 1653562 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0811 01:36:52.154685 1653562 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210811013550-1387367"
	I0811 01:36:52.154746 1653562 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210811013550-1387367"
	W0811 01:36:52.154769 1653562 addons.go:147] addon storage-provisioner should already be in state true
	I0811 01:36:52.154832 1653562 host.go:66] Checking if "embed-certs-20210811013550-1387367" exists ...
	I0811 01:36:52.154535 1653562 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 01:36:52.154725 1653562 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210811013550-1387367"
	I0811 01:36:52.154918 1653562 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210811013550-1387367"
	I0811 01:36:52.155261 1653562 cli_runner.go:115] Run: docker container inspect embed-certs-20210811013550-1387367 --format={{.State.Status}}
	I0811 01:36:52.155900 1653562 cli_runner.go:115] Run: docker container inspect embed-certs-20210811013550-1387367 --format={{.State.Status}}
	I0811 01:36:52.235806 1653562 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 01:36:52.235925 1653562 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 01:36:52.235935 1653562 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 01:36:52.235992 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:52.276356 1653562 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210811013550-1387367"
	W0811 01:36:52.276379 1653562 addons.go:147] addon default-storageclass should already be in state true
	I0811 01:36:52.276405 1653562 host.go:66] Checking if "embed-certs-20210811013550-1387367" exists ...
	I0811 01:36:52.276869 1653562 cli_runner.go:115] Run: docker container inspect embed-certs-20210811013550-1387367 --format={{.State.Status}}
	I0811 01:36:52.346005 1653562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50440 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa Username:docker}
	I0811 01:36:52.371952 1653562 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 01:36:52.371971 1653562 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 01:36:52.372026 1653562 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210811013550-1387367
	I0811 01:36:52.439359 1653562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50440 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/embed-certs-20210811013550-1387367/id_rsa Username:docker}
	I0811 01:36:52.734659 1653562 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 01:36:52.759149 1653562 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210811013550-1387367" to be "Ready" ...
	I0811 01:36:52.759488 1653562 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 01:36:52.763828 1653562 node_ready.go:49] node "embed-certs-20210811013550-1387367" has status "Ready":"True"
	I0811 01:36:52.763890 1653562 node_ready.go:38] duration metric: took 4.675252ms waiting for node "embed-certs-20210811013550-1387367" to be "Ready" ...
	I0811 01:36:52.763913 1653562 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 01:36:52.776064 1653562 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-blnds" in "kube-system" namespace to be "Ready" ...
	I0811 01:36:52.882411 1653562 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 01:36:54.389582 1653562 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.630041943s)
	I0811 01:36:54.389611 1653562 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
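The pipeline above pulls the coredns ConfigMap, uses sed to insert a hosts block ahead of the forward directive, and replaces the ConfigMap. A small Go sketch of the same string surgery on a Corefile; injectHostRecord and the sample Corefile are illustrative only:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the "forward . /etc/resolv.conf"
// line of a Corefile, which is what the sed expression in the log does in place.
func injectHostRecord(corefile, ip, host string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1", "host.minikube.internal"))
}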
	I0811 01:36:54.389645 1653562 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.654946004s)
	I0811 01:36:54.470160 1653562 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.587703762s)
	I0811 01:36:54.472765 1653562 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0811 01:36:54.472792 1653562 addons.go:344] enableAddons completed in 2.322542159s
	I0811 01:36:54.809743 1653562 pod_ready.go:102] pod "coredns-558bd4d5db-blnds" in "kube-system" namespace has status "Ready":"False"
	I0811 01:36:57.309697 1653562 pod_ready.go:102] pod "coredns-558bd4d5db-blnds" in "kube-system" namespace has status "Ready":"False"
	I0811 01:36:59.808850 1653562 pod_ready.go:102] pod "coredns-558bd4d5db-blnds" in "kube-system" namespace has status "Ready":"False"
	I0811 01:37:00.249336 1560222 kubeadm.go:392] StartCluster complete in 13m15.354012512s
	I0811 01:37:00.249483 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0811 01:37:00.341747 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.341772 1560222 logs.go:272] No container was found matching "kube-apiserver"
	I0811 01:37:00.341821 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0811 01:37:00.397885 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.397920 1560222 logs.go:272] No container was found matching "etcd"
	I0811 01:37:00.397977 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0811 01:37:00.460187 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.460205 1560222 logs.go:272] No container was found matching "coredns"
	I0811 01:37:00.460259 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0811 01:37:00.504486 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.504508 1560222 logs.go:272] No container was found matching "kube-scheduler"
	I0811 01:37:00.504560 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0811 01:37:00.544873 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.544892 1560222 logs.go:272] No container was found matching "kube-proxy"
	I0811 01:37:00.544945 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0811 01:37:00.590947 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.590965 1560222 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0811 01:37:00.591022 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0811 01:37:00.645461 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.645486 1560222 logs.go:272] No container was found matching "storage-provisioner"
	I0811 01:37:00.645543 1560222 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0811 01:37:00.685049 1560222 logs.go:270] 0 containers: []
	W0811 01:37:00.685068 1560222 logs.go:272] No container was found matching "kube-controller-manager"
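The block of `docker ps -a --filter=name=k8s_... --format={{.ID}}` calls above checks, component by component, whether any control-plane container was ever created, and warns when none matches. A sketch of that enumeration, with a hypothetical listContainers helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers whose name matches k8s_<component>,
// the same docker ps filter used for each control-plane component in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		if len(ids) == 0 {
			fmt.Printf("W: no container was found matching %q\n", c)
		}
	}
}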
	I0811 01:37:00.685079 1560222 logs.go:123] Gathering logs for kubelet ...
	I0811 01:37:00.685090 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0811 01:37:00.707041 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:52 old-k8s-version-20210811011523-1387367 kubelet[69013]: F0811 01:36:52.633715   69013 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.719352 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:54 old-k8s-version-20210811011523-1387367 kubelet[69218]: F0811 01:36:54.420969   69218 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.731093 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:56 old-k8s-version-20210811011523-1387367 kubelet[69428]: F0811 01:36:56.035361   69428 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.742824 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:57 old-k8s-version-20210811011523-1387367 kubelet[69625]: F0811 01:36:57.459475   69625 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.754569 1560222 logs.go:138] Found kubelet problem: Aug 11 01:36:58 old-k8s-version-20210811011523-1387367 kubelet[69813]: F0811 01:36:58.939561   69813 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	W0811 01:37:00.766591 1560222 logs.go:138] Found kubelet problem: Aug 11 01:37:00 old-k8s-version-20210811011523-1387367 kubelet[70004]: F0811 01:37:00.470711   70004 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:00.766769 1560222 logs.go:123] Gathering logs for dmesg ...
	I0811 01:37:00.766787 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0811 01:37:00.783797 1560222 logs.go:123] Gathering logs for describe nodes ...
	I0811 01:37:00.783825 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0811 01:37:00.854847 1560222 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0811 01:37:00.854869 1560222 logs.go:123] Gathering logs for Docker ...
	I0811 01:37:00.854879 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0811 01:37:00.874412 1560222 logs.go:123] Gathering logs for container status ...
	I0811 01:37:00.874439 1560222 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0811 01:37:01.911961 1560222 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.037503905s)
	W0811 01:37:01.912084 1560222 out.go:371] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	DOCKER_VERSION: 20.10.7
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0811 01:37:01.912109 1560222 out.go:242] * 
	W0811 01:37:01.912272 1560222 out.go:242] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	DOCKER_VERSION: 20.10.7
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0811 01:37:01.912293 1560222 out.go:242] * 
	W0811 01:37:01.914831 1560222 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                              │
	│                                                                                                                                                            │
	│    * Please attach the following file to the GitHub issue:                                                                                                 │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 01:37:01.918160 1560222 out.go:177] X Problems detected in kubelet:
	I0811 01:37:01.920024 1560222 out.go:177]   Aug 11 01:36:52 old-k8s-version-20210811011523-1387367 kubelet[69013]: F0811 01:36:52.633715   69013 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level BestEffort QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods besteffort]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/besteffort/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:01.922272 1560222 out.go:177]   Aug 11 01:36:54 old-k8s-version-20210811011523-1387367 kubelet[69218]: F0811 01:36:54.420969   69218 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:01.924783 1560222 out.go:177]   Aug 11 01:36:56 old-k8s-version-20210811011523-1387367 kubelet[69428]: F0811 01:36:56.035361   69428 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	I0811 01:37:01.929160 1560222 out.go:177] 
	W0811 01:37:01.929423 1560222 out.go:242] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.8.0-1041-aws
	DOCKER_VERSION: 20.10.7
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.8.0-1041-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0811 01:37:01.929871 1560222 out.go:242] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0811 01:37:01.929965 1560222 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/4172
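A minimal troubleshooting sketch, assuming shell access to the node (for example 'minikube ssh -p old-k8s-version-20210811011523-1387367'); the commands are the ones quoted in the kubeadm output and the minikube suggestion above, not an additional diagnosis:

    # inside the node: is the kubelet up, and why did it die?
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    # look for a crashed control-plane container, then inspect it
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID            # substitute an ID from the previous command

    # from the host: retry the start with the cgroup driver minikube suggests
    out/minikube-linux-arm64 start -p old-k8s-version-20210811011523-1387367 \
      --extra-config=kubelet.cgroup-driver=systemd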
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-08-11 01:23:39 UTC, end at Wed 2021-08-11 01:37:03 UTC. --
	Aug 11 01:25:51 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:25:51.870015867Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:25:51 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:25:51.870060363Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:25:51 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:25:51.885895617Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:27:24 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:27:24.903541231Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:27:24 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:27:24.903586482Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:27:24 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:27:24.938083080Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.436301577Z" level=info msg="ignoring event" container=b0009bba953290df054eb200f1c6648c969ba932290a3b18bba86b0592658c17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.618006819Z" level=info msg="ignoring event" container=b90446b631963e50bef6a8a75a2a15d7a9ffff84261302a2bb96e5f11c4ced84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.762705833Z" level=info msg="ignoring event" container=7cff22af5236a77397c7e7eeacbfa78789f22b94b78019ac5dfcd3facc0bb2e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.994643169Z" level=info msg="ignoring event" container=3860fcb8a51501d26cdce4211fd18a42572b2c892f9c87405de72bd0cbf2f8bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.242630986Z" level=info msg="ignoring event" container=df7f7346c1a581282e5739d9eed6f24c8fab35f05c99b7a6975bba0b2cd6c24d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.381704182Z" level=info msg="ignoring event" container=3f9eebfaea242c9341d72f32270da84f72d2e901a09515f0cabf7e95b1ffff7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.553688353Z" level=info msg="ignoring event" container=55095c59e1f79aeb336fb4f44f820b26e94e0adc1a4fb56c0d25cdb1855af32d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.786162375Z" level=info msg="ignoring event" container=c752dfd714905714e39d391be52164555ebe6ee7ef189700544b54d3eccb9b2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.952604740Z" level=info msg="ignoring event" container=6e530d8b48abc859fa9edd24b02f7a9f53b9b9afdd1ebcb36ae58f9bbfb30d59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.206372811Z" level=info msg="ignoring event" container=dc64339803c555dbb29e0fe78fc3a7a221a73bc9c401d00f52d17f6cb3799ca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.447022969Z" level=info msg="ignoring event" container=6fd8b20389eef48089c8d80a20e5c146562db66ccb85398942cfe088a623b26d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.655115831Z" level=info msg="ignoring event" container=ed8e26d1b95bce2c70563bf48b69b5633d1dde1c3f78a270c36ab6a22494001f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.796142089Z" level=info msg="ignoring event" container=d81ef106b7f252ec3481ed19ad0371d94b6f503f3f5be5af3e1db520e45e58d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.955646957Z" level=info msg="ignoring event" container=db7f849fbee64476ec1977de28957629089fbaa09c5ff9db1711e01720c9d231 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.099985505Z" level=info msg="ignoring event" container=647e6f4b64a6bbb25d1cbcff193c1c31b6d22fe0d0fdee32c81c3c1afb6f9bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.217629161Z" level=info msg="ignoring event" container=f3f08540d15275d79b84fb7ec21d48bf72ab04eb0f310c79f8f13c2449095d19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.383454480Z" level=info msg="ignoring event" container=ced087fa8833d50c92c3dcb0e9967772638e60d8637699901b48b17c4e2cf1d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.522846329Z" level=info msg="ignoring event" container=a99aefb77ed4a374c849fc1ff0f6f463f088407db4a95e83da76a5f6e63b41e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.702531964Z" level=info msg="ignoring event" container=16ecd8d66225eb19f0a42ffa9723a4fe995695367151d9e03a7f719fdd375a54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2021-08-11T01:37:05Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001093] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000822] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000cf2b9e77
	[  +0.001079] FS-Cache: N-key=[8] '38a8010000000000'
	[  +0.008061] FS-Cache: Duplicate cookie detected
	[  +0.000824] FS-Cache: O-cookie c=000000009e8af87d [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000d0f41ca1 n=00000000882d24dd
	[  +0.001078] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000828] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001344] FS-Cache: N-cookie d=00000000d0f41ca1 n=000000006ce4882d
	[  +0.001069] FS-Cache: N-key=[8] '38a8010000000000'
	[  +1.509820] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=00000000e1eedaf3 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001318] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000025fbee24
	[  +0.001053] FS-Cache: O-key=[8] '37a8010000000000'
	[  +0.000829] FS-Cache: N-cookie c=000000006f83a19d [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001316] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000d322ea0c
	[  +0.001048] FS-Cache: N-key=[8] '37a8010000000000'
	[  +0.277640] FS-Cache: Duplicate cookie detected
	[  +0.000818] FS-Cache: O-cookie c=000000007ae3c387 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001327] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000004bd4688e
	[  +0.001069] FS-Cache: O-key=[8] '3ca8010000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000000007642642 [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001309] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000ae88504f
	[  +0.001071] FS-Cache: N-key=[8] '3ca8010000000000'
	
	* 
	* ==> kernel <==
	*  01:37:05 up 11:19,  0 users,  load average: 3.42, 3.20, 2.88
	Linux old-k8s-version-20210811011523-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-08-11 01:23:39 UTC, end at Wed 2021-08-11 01:37:05 UTC. --
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.749279   70763 desired_state_of_world_populator.go:130] Desired state populator starts to run
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: E0811 01:37:04.752464   70763 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: E0811 01:37:04.753154   70763 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210811011523-1387367?timeout=10s: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.825933   70763 clientconn.go:440] parsed scheme: "unix"
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.826171   70763 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.826303   70763 asm_arm64.s:1128] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.826392   70763 clientconn.go:796] ClientConn switching balancer to "pick_first"
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.826511   70763 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x40005f46a0, CONNECTING
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.826799   70763 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x40005f46a0, READY
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: E0811 01:37:04.856450   70763 kubelet.go:2244] node "old-k8s-version-20210811011523-1387367" not found
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.856493   70763 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.864258   70763 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:04.973622   70763 kubelet_node_status.go:72] Attempting to register node old-k8s-version-20210811011523-1387367
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: E0811 01:37:04.979483   70763 kubelet.go:2244] node "old-k8s-version-20210811011523-1387367" not found
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: E0811 01:37:04.979717   70763 kubelet_node_status.go:94] Unable to register node "old-k8s-version-20210811011523-1387367" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:37:04 old-k8s-version-20210811011523-1387367 kubelet[70763]: E0811 01:37:04.979866   70763 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210811011523-1387367?timeout=10s: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:05.018214   70763 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:05.065206   70763 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 kubelet[70763]: E0811 01:37:05.079760   70763 kubelet.go:2244] node "old-k8s-version-20210811011523-1387367" not found
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:05.090990   70763 cpu_manager.go:155] [cpumanager] starting with none policy
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:05.091008   70763 cpu_manager.go:156] [cpumanager] reconciling every 10s
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 kubelet[70763]: I0811 01:37:05.091015   70763 policy_none.go:42] [cpumanager] none policy: Start
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 kubelet[70763]: F0811 01:37:05.092286   70763 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 11 01:37:05 old-k8s-version-20210811011523-1387367 systemd[1]: kubelet.service: Failed with result 'exit-code'.
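The recurring fatal at kubelet.go:1359 is the actual blocker: every kubelet restart dies while writing the hugetlb limit for its QOS cgroups. A minimal reproduction sketch using the path and value reported above (run as root inside the node; the write is expected to fail the same way):

    CG=/sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes
    ls -l "$CG"
    echo 4611686018427387904 > "$CG"   # expected: permission denied, matching the kubelet failure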
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 01:37:05.336486 1667068 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (807.59s)
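The 'describe nodes' failure in the stderr block above is downstream of the same problem: with no control plane running, nothing listens on port 8443, so the kubectl call is refused. An illustrative confirmation from inside the node (curl is an assumption here; any probe of localhost:8443 would do):

    sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"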

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:37:24.126394 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:37:41.079785 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:38:04.808113 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:12.386705 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:39:12.392005 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:39:12.402613 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:39:12.423095 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:12.463787 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:39:12.544042 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:39:12.704368 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:39:13.024881 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:13.665839 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:14.946436 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:17.507325 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:22.627907 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:32.868099 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:39:53.348287 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:40:34.308530 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
[previous line repeated 81 more times]
E0811 01:41:56.228735 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
[previous line repeated 44 more times]
E0811 01:42:41.080379 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
[previous line repeated 6 more times]
E0811 01:42:47.855043 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
[previous line repeated 16 more times]
E0811 01:43:04.813080 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
[previous line repeated 27 more times]
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
E0811 01:44:12.387695 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
[the warning above was repeated 24 more times]
E0811 01:44:40.069121 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.58.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.58.2:8443: connect: connection refused
[the warning above was repeated 42 more times]
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[the warning above was repeated 10 more times]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:325: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[the warning above was repeated 19 more times]
start_stop_delete_test.go:247: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367: exit status 2 (426.766461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: "old-k8s-version-20210811011523-1387367" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:248: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
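A quick manual check of the same condition (not part of the test harness; the context name, namespace, and label selector below are taken from the log lines above) would look roughly like:

kubectl --context old-k8s-version-20210811011523-1387367 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
kubectl --context old-k8s-version-20210811011523-1387367 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

In this run both commands would fail the same way the test helper did, since the apiserver at 192.168.58.2:8443 refused connections for the entire 9m wait window.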
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210811011523-1387367
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210811011523-1387367:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063",
	        "Created": "2021-08-11T01:15:25.302959325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1560408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-11T01:23:38.623423696Z",
	            "FinishedAt": "2021-08-11T01:23:37.413617456Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/hostname",
	        "HostsPath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/hosts",
	        "LogPath": "/var/lib/docker/containers/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063/bba6c5a68bfe92f88e2496a453e876e1b53a0cf07478dc7bff0624ff71022063-json.log",
	        "Name": "/old-k8s-version-20210811011523-1387367",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210811011523-1387367:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210811011523-1387367",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef-init/diff:/var/lib/docker/overlay2/b901673749d4c23cf617379d66c43acbc184f898f580a05fca5568725e6ccb6a/diff:/var/lib/docker/overlay2/3fd19ee2c9d46b2cdb8a592d42d57d9efdba3a556c98f5018ae07caa15606bc4/diff:/var/lib/docker/overlay2/31f547e426e6dfa6ed65e0b7cb851c18e771f23a77868552685aacb2e126dc0a/diff:/var/lib/docker/overlay2/6ae53b304b800757235653c63c7879ae7f05b4d4f0400f7f6fadc53e2059aa5a/diff:/var/lib/docker/overlay2/7702d6ed068e8b454dd11af18cb8cb76986898926e3e3130c2d7f638062de9ee/diff:/var/lib/docker/overlay2/e67b0ce82f4d6c092698530106fa38495aa54b2fe5600ac022386a3d17165948/diff:/var/lib/docker/overlay2/d3ddbdbbe88f3c5a0867637eeb78a22790daa833a6179cdd4690044007911336/diff:/var/lib/docker/overlay2/10c48536a5187dfe63f1c090ec32daef76e852de7cc4a7e7f96a2fa1510314cc/diff:/var/lib/docker/overlay2/2186c26bc131feb045ca64a28e2cc431fed76b32afc3d3587916b98a9af807fe/diff:/var/lib/docker/overlay2/292c9d
aaf6d60ee235c7ac65bfc1b61b9c0d360ebbebcf08ba5efeb1b40de075/diff:/var/lib/docker/overlay2/9bc521e84afeeb62fa312e9eb2afc367bc449dbf66f412e17eb2338f79d6f920/diff:/var/lib/docker/overlay2/b1a93cf97438f068af56026fc52aaa329c46e4cac3d8f91c8d692871adaf451a/diff:/var/lib/docker/overlay2/b8e42d5d9e69e72a11e3cad660b9f29335dfc6cd1b4a6aebdbf5e6f313efe749/diff:/var/lib/docker/overlay2/6a6eaef3ce06d941ce606aaebc530878ce54d24a51c7947ca936a3a6eb4dac16/diff:/var/lib/docker/overlay2/62370bd2a6e35ce796647f79ccf9906147c91e8ceee31e401bdb7842371c6bee/diff:/var/lib/docker/overlay2/e673dacc1c6815100340b85af47aeb90eb5fca87778caec1d728de5b8cc9a36e/diff:/var/lib/docker/overlay2/bd17ea1d8cd8e2f88bd7fb4cee8a097365f6b81efc91f203a0504873fc0916a6/diff:/var/lib/docker/overlay2/d2f15007a2a5c037903647e5dd0d6882903fa163d23087bbd8eadeaf3618377b/diff:/var/lib/docker/overlay2/0bbc7fe1b1d62a2db9b4f402e6bc8781815951ae6df608307fd50a2fde242253/diff:/var/lib/docker/overlay2/d124fa0a0ea67ad0362eec0adf1f3e7cbd885b2cf4c31f83e917d97a09a791af/diff:/var/lib/d
ocker/overlay2/ee74e2f91490ecb544a95b306f1001046f3c4656413878d09be8bf67de7b4c4f/diff:/var/lib/docker/overlay2/4279b3790ea6aeb262c4ecd9cf4aae5beb1430f4fbb599b49ff27d0f7b3a9714/diff:/var/lib/docker/overlay2/b7fd6a0c88249dbf5e233463fbe08559ca287465617e7721977a002204ea3af5/diff:/var/lib/docker/overlay2/c495a83eeda1cf6df33d49341ee01f15738845e6330c0a5b3c29e11fdc4733b0/diff:/var/lib/docker/overlay2/ac747f0260d49943953568bbbe150f3a4f28d70bd82f40d0485ef13b12195044/diff:/var/lib/docker/overlay2/aa98d62ac831ecd60bc1acfa1708c0648c306bb7fa187026b472e9ae5c3364a4/diff:/var/lib/docker/overlay2/34829b132a53df856a1be03aa46565640e20cb075db18bd9775a5055fe0c0b22/diff:/var/lib/docker/overlay2/85a074fe6f79f3ea9d8b2f628355f41bb4f73b398257f8b6659bc171d86a0736/diff:/var/lib/docker/overlay2/c8c145d2e68e655880cd5c8fae8cb9f7cbd6b112f1f64fced224b17d4f60fbc7/diff:/var/lib/docker/overlay2/7480ad16aa2479be3569dd07eca685bc3a37a785e7ff281c448c7ca718cc67c3/diff:/var/lib/docker/overlay2/519f1304b1b8ee2daf8c1b9411f3e46d4fedacc8d6446937321372c4e8d
f2cb9/diff:/var/lib/docker/overlay2/246fcb20bef1dbfdc41186d1b7143566cd571a067830cc3f946b232024c2e85c/diff:/var/lib/docker/overlay2/f5f15e6d497abc56d9a2d901ed821a56e6f3effe2fc8d6c3ef64297faea15179/diff:/var/lib/docker/overlay2/3aa1fb1105e860c53ef63317f6757f9629a4a20f35764d976df2b0f0cee5d4f2/diff:/var/lib/docker/overlay2/765f7cba41acbb266d2cef89f2a76a5659b78c3b075223bf23257ac44acfe177/diff:/var/lib/docker/overlay2/53179410fe05d9ddea0a22ba2c123ca8e75f9c7839c2a64902e411e2bda2de23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8f21f19a48e487593849b1d06a20a9bf9142e37e95ec4707bc3765f7e049ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210811011523-1387367",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210811011523-1387367/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210811011523-1387367",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210811011523-1387367",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210811011523-1387367",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2f03e3515d97b5a3e58e944c50f46d22ccfe797d00d1d40e45726faca300bb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50395"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50391"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50393"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50392"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f2f03e3515d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210811011523-1387367": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bba6c5a68bfe",
	                        "old-k8s-version-20210811011523-1387367"
	                    ],
	                    "NetworkID": "f9263d0eb8195b3d75e38102721f923423021af8bf484b8d2a95b3aadb987266",
	                    "EndpointID": "24cd56ecd9abff4711f1582fd2589536026e1fc6ee24a461fe2bbae8bb5905b8",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
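The Ports block above shows the container's sshd published on 127.0.0.1:50395. The same Go template the harness uses later in this log can read that mapping back directly (a sketch against the container named above, assuming it is still running):

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210811011523-1387367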
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367: exit status 2 (507.838429ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-20210811011523-1387367 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 -p old-k8s-version-20210811011523-1387367 logs -n 25: exit status 110 (2.743793403s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                      Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:27:51 UTC | Wed, 11 Aug 2021 01:29:11 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:21 UTC | Wed, 11 Aug 2021 01:29:22 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:22 UTC | Wed, 11 Aug 2021 01:29:34 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:34 UTC | Wed, 11 Aug 2021 01:29:34 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:29:34 UTC | Wed, 11 Aug 2021 01:35:25 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:43 UTC | Wed, 11 Aug 2021 01:35:43 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| pause   | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:43 UTC | Wed, 11 Aug 2021 01:35:44 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                   |         |         |                               |                               |
	| unpause | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:45 UTC | Wed, 11 Aug 2021 01:35:46 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                   |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:47 UTC | Wed, 11 Aug 2021 01:35:49 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210811012751-1387367                 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:50 UTC | Wed, 11 Aug 2021 01:35:50 UTC |
	|         | no-preload-20210811012751-1387367                 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:35:50 UTC | Wed, 11 Aug 2021 01:37:03 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:37:12 UTC | Wed, 11 Aug 2021 01:37:13 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:37:13 UTC | Wed, 11 Aug 2021 01:37:24 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:37:25 UTC | Wed, 11 Aug 2021 01:37:25 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:37:25 UTC | Wed, 11 Aug 2021 01:43:56 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                   |         |         |                               |                               |
	|         | --driver=docker                                   |                                                   |         |         |                               |                               |
	|         | --container-runtime=docker                        |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:44:07 UTC | Wed, 11 Aug 2021 01:44:08 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                   |         |         |                               |                               |
	| pause   | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:44:08 UTC | Wed, 11 Aug 2021 01:44:08 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                   |         |         |                               |                               |
	| unpause | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:44:09 UTC | Wed, 11 Aug 2021 01:44:10 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:44:11 UTC | Wed, 11 Aug 2021 01:44:14 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20210811013550-1387367                | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:44:14 UTC | Wed, 11 Aug 2021 01:44:14 UTC |
	|         | embed-certs-20210811013550-1387367                |                                                   |         |         |                               |                               |
	| delete  | -p                                                | disable-driver-mounts-20210811014414-1387367      | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:44:14 UTC | Wed, 11 Aug 2021 01:44:15 UTC |
	|         | disable-driver-mounts-20210811014414-1387367      |                                                   |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210811014415-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:44:15 UTC | Wed, 11 Aug 2021 01:45:24 UTC |
	|         | default-k8s-different-port-20210811014415-1387367 |                                                   |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                   |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                 |                                                   |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=docker       |                                                   |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                   |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210811014415-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:45:34 UTC | Wed, 11 Aug 2021 01:45:35 UTC |
	|         | default-k8s-different-port-20210811014415-1387367 |                                                   |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                   |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                   |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210811014415-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:45:35 UTC | Wed, 11 Aug 2021 01:45:46 UTC |
	|         | default-k8s-different-port-20210811014415-1387367 |                                                   |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                   |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210811014415-1387367 | jenkins | v1.22.0 | Wed, 11 Aug 2021 01:45:47 UTC | Wed, 11 Aug 2021 01:45:47 UTC |
	|         | default-k8s-different-port-20210811014415-1387367 |                                                   |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                   |         |         |                               |                               |
	|---------|---------------------------------------------------|---------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 01:45:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 01:45:47.180945 1753749 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:45:47.181060 1753749 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:45:47.181067 1753749 out.go:311] Setting ErrFile to fd 2...
	I0811 01:45:47.181071 1753749 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:45:47.181200 1753749 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:45:47.181445 1753749 out.go:305] Setting JSON to false
	I0811 01:45:47.182424 1753749 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":41294,"bootTime":1628605053,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 01:45:47.182513 1753749 start.go:121] virtualization:  
	I0811 01:45:47.186028 1753749 out.go:177] * [default-k8s-different-port-20210811014415-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 01:45:47.188346 1753749 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 01:45:47.186197 1753749 notify.go:169] Checking for updates...
	I0811 01:45:47.190619 1753749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:45:47.192959 1753749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 01:45:47.195511 1753749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 01:45:47.196501 1753749 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 01:45:47.259724 1753749 docker.go:132] docker version: linux-20.10.8
	I0811 01:45:47.259832 1753749 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:45:47.420712 1753749 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:45:47.306347041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:45:47.420876 1753749 docker.go:244] overlay module found
	I0811 01:45:47.424689 1753749 out.go:177] * Using the docker driver based on existing profile
	I0811 01:45:47.424760 1753749 start.go:278] selected driver: docker
	I0811 01:45:47.424778 1753749 start.go:751] validating driver "docker" against &{Name:default-k8s-different-port-20210811014415-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210811014415-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:45:47.424913 1753749 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 01:45:47.424963 1753749 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:45:47.425002 1753749 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 01:45:47.427599 1753749 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 01:45:47.428034 1753749 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:45:47.565199 1753749 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:45:47.485301604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	W0811 01:45:47.565325 1753749 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:45:47.565343 1753749 out.go:242] ! Your cgroup does not allow setting memory.
	I0811 01:45:47.567483 1753749 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 01:45:47.567583 1753749 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 01:45:47.567604 1753749 cni.go:93] Creating CNI manager for ""
	I0811 01:45:47.567611 1753749 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:45:47.567619 1753749 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210811014415-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210811014415-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:45:47.569869 1753749 out.go:177] * Starting control plane node default-k8s-different-port-20210811014415-1387367 in cluster default-k8s-different-port-20210811014415-1387367
	I0811 01:45:47.569902 1753749 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 01:45:47.573074 1753749 out.go:177] * Pulling base image ...
	I0811 01:45:47.573112 1753749 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:45:47.573159 1753749 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 01:45:47.573167 1753749 cache.go:56] Caching tarball of preloaded images
	I0811 01:45:47.573325 1753749 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 01:45:47.573342 1753749 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 01:45:47.573457 1753749 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/config.json ...
	I0811 01:45:47.573647 1753749 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 01:45:47.637471 1753749 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 01:45:47.637494 1753749 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 01:45:47.637507 1753749 cache.go:205] Successfully downloaded all kic artifacts
	I0811 01:45:47.637545 1753749 start.go:313] acquiring machines lock for default-k8s-different-port-20210811014415-1387367: {Name:mkb1f4702a7b36f12a58195bff46b6ec9c16799d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 01:45:47.637628 1753749 start.go:317] acquired machines lock for "default-k8s-different-port-20210811014415-1387367" in 63.581µs
	I0811 01:45:47.637653 1753749 start.go:93] Skipping create...Using existing machine configuration
	I0811 01:45:47.637659 1753749 fix.go:55] fixHost starting: 
	I0811 01:45:47.637972 1753749 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210811014415-1387367 --format={{.State.Status}}
	I0811 01:45:47.689990 1753749 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210811014415-1387367: state=Stopped err=<nil>
	W0811 01:45:47.690021 1753749 fix.go:134] unexpected machine state, will restart: <nil>
	I0811 01:45:47.692743 1753749 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20210811014415-1387367" ...
	I0811 01:45:47.692810 1753749 cli_runner.go:115] Run: docker start default-k8s-different-port-20210811014415-1387367
	I0811 01:45:48.074390 1753749 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210811014415-1387367 --format={{.State.Status}}
	I0811 01:45:48.122842 1753749 kic.go:420] container "default-k8s-different-port-20210811014415-1387367" state is running.
	I0811 01:45:48.123283 1753749 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:48.161215 1753749 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/config.json ...
	I0811 01:45:48.161815 1753749 machine.go:88] provisioning docker machine ...
	I0811 01:45:48.161846 1753749 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210811014415-1387367"
	I0811 01:45:48.161942 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:48.203776 1753749 main.go:130] libmachine: Using SSH client type: native
	I0811 01:45:48.204595 1753749 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50455 <nil> <nil>}
	I0811 01:45:48.204619 1753749 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210811014415-1387367 && echo "default-k8s-different-port-20210811014415-1387367" | sudo tee /etc/hostname
	I0811 01:45:48.205369 1753749 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0811 01:45:51.338738 1753749 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210811014415-1387367
	
	I0811 01:45:51.338890 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:51.377855 1753749 main.go:130] libmachine: Using SSH client type: native
	I0811 01:45:51.378041 1753749 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50455 <nil> <nil>}
	I0811 01:45:51.378073 1753749 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210811014415-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210811014415-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210811014415-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 01:45:51.504709 1753749 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 01:45:51.504738 1753749 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 01:45:51.504774 1753749 ubuntu.go:177] setting up certificates
	I0811 01:45:51.504785 1753749 provision.go:83] configureAuth start
	I0811 01:45:51.504854 1753749 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:51.555097 1753749 provision.go:137] copyHostCerts
	I0811 01:45:51.555156 1753749 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 01:45:51.555164 1753749 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 01:45:51.555233 1753749 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 01:45:51.555318 1753749 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 01:45:51.555324 1753749 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 01:45:51.555351 1753749 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 01:45:51.555402 1753749 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 01:45:51.555406 1753749 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 01:45:51.555426 1753749 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 01:45:51.555465 1753749 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210811014415-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210811014415-1387367]
	I0811 01:45:52.280968 1753749 provision.go:171] copyRemoteCerts
	I0811 01:45:52.281069 1753749 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 01:45:52.281115 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:52.314003 1753749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50455 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/default-k8s-different-port-20210811014415-1387367/id_rsa Username:docker}
	I0811 01:45:52.400102 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 01:45:52.417779 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1314 bytes)
	I0811 01:45:52.435807 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 01:45:52.452922 1753749 provision.go:86] duration metric: configureAuth took 948.116695ms
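configureAuth above generates the Docker server certificate in Go, signed by minikube's own CA and carrying the SANs listed in the provision.go:111 line. As a rough openssl equivalent of issuing such a certificate (not what minikube actually runs; all file names here are illustrative):

    # Issue a TLS server certificate from an existing CA with the SANs logged above.
    # ca.pem / ca-key.pem are assumed to be the existing CA pair; output names are made up.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-different-port-20210811014415-1387367"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-different-port-20210811014415-1387367')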
	I0811 01:45:52.452951 1753749 ubuntu.go:193] setting minikube options for container-runtime
	I0811 01:45:52.453221 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:52.486251 1753749 main.go:130] libmachine: Using SSH client type: native
	I0811 01:45:52.486426 1753749 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50455 <nil> <nil>}
	I0811 01:45:52.486443 1753749 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 01:45:52.601274 1753749 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 01:45:52.601297 1753749 ubuntu.go:71] root file system type: overlay
	I0811 01:45:52.601456 1753749 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 01:45:52.601524 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:52.645470 1753749 main.go:130] libmachine: Using SSH client type: native
	I0811 01:45:52.645679 1753749 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50455 <nil> <nil>}
	I0811 01:45:52.645790 1753749 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 01:45:52.778571 1753749 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 01:45:52.778653 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:52.826552 1753749 main.go:130] libmachine: Using SSH client type: native
	I0811 01:45:52.826734 1753749 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50455 <nil> <nil>}
	I0811 01:45:52.826765 1753749 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 01:45:52.958728 1753749 main.go:130] libmachine: SSH cmd err, output: <nil>: 
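The diff-or-install one-liner above is what keeps the unit update idempotent: Docker is only restarted when the rendered docker.service actually differs from the installed one. The same pattern spelled out as a sketch (paths and systemctl flags copied from the logged command):

    # Swap in the new unit and restart Docker only when it differs from what is installed.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi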
	I0811 01:45:52.958795 1753749 machine.go:91] provisioned docker machine in 4.796962315s
	I0811 01:45:52.958817 1753749 start.go:267] post-start starting for "default-k8s-different-port-20210811014415-1387367" (driver="docker")
	I0811 01:45:52.958871 1753749 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 01:45:52.958964 1753749 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 01:45:52.959034 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:53.002232 1753749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50455 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/default-k8s-different-port-20210811014415-1387367/id_rsa Username:docker}
	I0811 01:45:53.089263 1753749 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 01:45:53.092578 1753749 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 01:45:53.092606 1753749 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 01:45:53.092617 1753749 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 01:45:53.092625 1753749 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 01:45:53.092634 1753749 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 01:45:53.092691 1753749 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 01:45:53.092775 1753749 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 01:45:53.092878 1753749 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 01:45:53.100440 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:45:53.134207 1753749 start.go:270] post-start completed in 175.329617ms
	I0811 01:45:53.134274 1753749 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:45:53.134317 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:53.188904 1753749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50455 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/default-k8s-different-port-20210811014415-1387367/id_rsa Username:docker}
	I0811 01:45:53.286091 1753749 fix.go:57] fixHost completed within 5.648424521s
	I0811 01:45:53.286164 1753749 start.go:80] releasing machines lock for "default-k8s-different-port-20210811014415-1387367", held for 5.648525706s
	I0811 01:45:53.286288 1753749 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:53.328841 1753749 ssh_runner.go:149] Run: systemctl --version
	I0811 01:45:53.328895 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:53.329151 1753749 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 01:45:53.329210 1753749 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210811014415-1387367
	I0811 01:45:53.386595 1753749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50455 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/default-k8s-different-port-20210811014415-1387367/id_rsa Username:docker}
	I0811 01:45:53.388217 1753749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50455 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/default-k8s-different-port-20210811014415-1387367/id_rsa Username:docker}
	I0811 01:45:53.473004 1753749 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 01:45:53.611769 1753749 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:45:53.621525 1753749 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 01:45:53.621590 1753749 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 01:45:53.631513 1753749 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 01:45:53.644160 1753749 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 01:45:53.723771 1753749 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 01:45:53.809175 1753749 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:45:53.818877 1753749 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 01:45:53.908162 1753749 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 01:45:53.917357 1753749 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:45:53.971737 1753749 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:45:54.028882 1753749 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 01:45:54.028973 1753749 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210811014415-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:45:54.061436 1753749 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 01:45:54.064530 1753749 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:45:54.073304 1753749 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:45:54.073375 1753749 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:45:54.124680 1753749 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	busybox:1.28.4-glibc
	
	-- /stdout --
	I0811 01:45:54.124704 1753749 docker.go:466] Images already preloaded, skipping extraction
	I0811 01:45:54.124760 1753749 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:45:54.177389 1753749 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	busybox:1.28.4-glibc
	
	-- /stdout --
	I0811 01:45:54.177416 1753749 cache_images.go:74] Images are preloaded, skipping loading
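The two docker images listings above are how the preload check decides extraction can be skipped: every expected image for Kubernetes v1.21.3 is already present in the daemon. A hedged sketch of that comparison (the expected list is copied from the log; the loop itself is illustrative):

    # Report any expected image that is not already in the local Docker daemon.
    have="$(docker images --format '{{.Repository}}:{{.Tag}}')"
    for img in \
      k8s.gcr.io/kube-apiserver:v1.21.3 \
      k8s.gcr.io/kube-controller-manager:v1.21.3 \
      k8s.gcr.io/kube-proxy:v1.21.3 \
      k8s.gcr.io/kube-scheduler:v1.21.3 \
      k8s.gcr.io/etcd:3.4.13-0 \
      k8s.gcr.io/coredns/coredns:v1.8.0 \
      k8s.gcr.io/pause:3.4.1 \
      gcr.io/k8s-minikube/storage-provisioner:v5; do
      grep -qxF "$img" <<<"$have" || echo "missing: $img"
    done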
	I0811 01:45:54.177470 1753749 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 01:45:54.318486 1753749 cni.go:93] Creating CNI manager for ""
	I0811 01:45:54.318509 1753749 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 01:45:54.318520 1753749 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 01:45:54.318556 1753749 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210811014415-1387367 NodeName:default-k8s-different-port-20210811014415-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 01:45:54.318722 1753749 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20210811014415-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 01:45:54.318818 1753749 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20210811014415-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210811014415-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0811 01:45:54.318901 1753749 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 01:45:54.328734 1753749 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 01:45:54.328802 1753749 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 01:45:54.337700 1753749 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0811 01:45:54.356795 1753749 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 01:45:54.375053 1753749 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0811 01:45:54.392023 1753749 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 01:45:54.396955 1753749 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:45:54.406544 1753749 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367 for IP: 192.168.49.2
	I0811 01:45:54.406656 1753749 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 01:45:54.406705 1753749 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 01:45:54.406800 1753749 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.key
	I0811 01:45:54.406848 1753749 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/apiserver.key.dd3b5fb2
	I0811 01:45:54.406880 1753749 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/proxy-client.key
	I0811 01:45:54.407010 1753749 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 01:45:54.407084 1753749 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 01:45:54.407107 1753749 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 01:45:54.407172 1753749 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 01:45:54.407236 1753749 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 01:45:54.407291 1753749 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 01:45:54.407367 1753749 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:45:54.408720 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 01:45:54.433076 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 01:45:54.453582 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 01:45:54.471446 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 01:45:54.499225 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 01:45:54.518993 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 01:45:54.543962 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 01:45:54.569262 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 01:45:54.590847 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 01:45:54.614326 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 01:45:54.650289 1753749 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 01:45:54.672772 1753749 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 01:45:54.688217 1753749 ssh_runner.go:149] Run: openssl version
	I0811 01:45:54.693821 1753749 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 01:45:54.706200 1753749 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 01:45:54.709503 1753749 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 01:45:54.709576 1753749 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 01:45:54.714506 1753749 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 01:45:54.723152 1753749 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 01:45:54.733631 1753749 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 01:45:54.737599 1753749 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 01:45:54.737695 1753749 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 01:45:54.743163 1753749 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 01:45:54.752130 1753749 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 01:45:54.759812 1753749 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:45:54.763323 1753749 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:45:54.763419 1753749 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:45:54.768456 1753749 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
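The ls/openssl/ln sequence above is the standard OpenSSL hash-link layout for /etc/ssl/certs: each CA PEM gets a symlink named <subject-hash>.0 so the library can locate it. A standalone sketch of the same steps, assuming the minikubeCA path from the log:

    # Link a CA certificate into /etc/ssl/certs under its OpenSSL subject-hash name.
    CERT=/usr/share/ca-certificates/minikubeCA.pem         # path from the log
    HASH="$(openssl x509 -hash -noout -in "$CERT")"        # e.g. b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(basename "$CERT")"
    sudo ln -fs "/etc/ssl/certs/$(basename "$CERT")" "/etc/ssl/certs/${HASH}.0"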
	I0811 01:45:54.779243 1753749 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210811014415-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210811014415-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:45:54.779415 1753749 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:45:54.840510 1753749 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 01:45:54.848786 1753749 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0811 01:45:54.848821 1753749 kubeadm.go:600] restartCluster start
	I0811 01:45:54.848906 1753749 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0811 01:45:54.857492 1753749 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:54.858501 1753749 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210811014415-1387367" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:45:54.858786 1753749 kubeconfig.go:128] "default-k8s-different-port-20210811014415-1387367" context is missing from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig - will repair!
	I0811 01:45:54.859366 1753749 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:45:54.862185 1753749 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0811 01:45:54.871137 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:54.871196 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:54.883617 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:55.083982 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:55.084057 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:55.094458 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:55.283711 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:55.283845 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:55.294256 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:55.484538 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:55.484648 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:55.495420 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:55.684695 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:55.684790 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:55.696654 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:55.883980 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:55.884062 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:55.895053 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:56.084243 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:56.084406 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:56.095426 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:56.283705 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:56.283784 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:56.295103 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:56.484301 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:56.484381 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:56.494856 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:56.684156 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:56.684242 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:56.694865 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:56.884153 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:56.884314 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:56.896404 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:57.084685 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:57.084763 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:57.095751 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:57.284342 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:57.284450 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:57.298031 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:57.484350 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:57.484435 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:57.495598 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:57.683862 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:57.683937 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:57.694411 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:57.884711 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:57.884786 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:57.895432 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:57.895449 1753749 api_server.go:164] Checking apiserver status ...
	I0811 01:45:57.895491 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 01:45:57.905880 1753749 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:57.905900 1753749 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0811 01:45:57.905907 1753749 kubeadm.go:1032] stopping kube-system containers ...
	I0811 01:45:57.905958 1753749 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:45:57.951018 1753749 docker.go:367] Stopping containers: [0c1c262fae17 92a1078f11f1 cc914756a133 caa799228c18 61c74c5a419d 80790ff019d9 357c4073e814 eb642f08c453 55fd1f7d1917 3e2b318f5907 56ec91033ad1 085ef33081d3 ae7478ff7180 11a7b9e3ae0a bb1a60d2f794]
	I0811 01:45:57.951094 1753749 ssh_runner.go:149] Run: docker stop 0c1c262fae17 92a1078f11f1 cc914756a133 caa799228c18 61c74c5a419d 80790ff019d9 357c4073e814 eb642f08c453 55fd1f7d1917 3e2b318f5907 56ec91033ad1 085ef33081d3 ae7478ff7180 11a7b9e3ae0a bb1a60d2f794
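The docker stop above receives exactly the IDs produced by the docker ps filter two lines earlier; folded into one sketch:

    # Stop every kube-system pod container. The name filter matches the
    # k8s_<container>_<pod>_<namespace>_... naming scheme used by dockershim.
    ids="$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')"
    # $ids is intentionally unquoted so each container ID becomes its own argument.
    [ -n "$ids" ] && docker stop $ids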
	I0811 01:45:57.993657 1753749 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0811 01:45:58.004229 1753749 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:45:58.011602 1753749 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Aug 11 01:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 11 01:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2135 Aug 11 01:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 11 01:44 /etc/kubernetes/scheduler.conf
	
	I0811 01:45:58.011663 1753749 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0811 01:45:58.018754 1753749 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0811 01:45:58.025806 1753749 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0811 01:45:58.032475 1753749 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:58.032552 1753749 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0811 01:45:58.038927 1753749 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0811 01:45:58.045495 1753749 kubeadm.go:165] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0811 01:45:58.045578 1753749 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0811 01:45:58.051871 1753749 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 01:45:58.058728 1753749 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0811 01:45:58.058753 1753749 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:45:58.282363 1753749 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:46:00.500506 1753749 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.218070097s)
	I0811 01:46:00.500533 1753749 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:46:00.763646 1753749 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 01:46:01.001343 1753749 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
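Rather than a full kubeadm init, the restart path above re-runs individual init phases against the freshly rendered config. Condensed into a sketch (binary path and config path taken from the log; the loop itself is illustrative):

    # Re-run the init phases used when reconfiguring an existing control plane.
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      # $phase is intentionally word-split so "certs all" becomes two arguments.
      sudo env PATH="/var/lib/minikube/binaries/v1.21.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done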
	I0811 01:46:01.308930 1753749 api_server.go:50] waiting for apiserver process to appear ...
	I0811 01:46:01.309002 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:46:01.824835 1753749 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
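The repeated "Checking apiserver status" entries earlier, and the two pgrep runs just above, are a poll for the kube-apiserver process. A minimal script-style equivalent with an explicit timeout (the pgrep pattern is taken from the log; the 60s budget here is illustrative):

    # Script sketch: wait up to 60s for a kube-apiserver process to appear.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo 'timed out waiting for kube-apiserver' >&2
        exit 1
      fi
      sleep 0.2
    done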
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-08-11 01:23:39 UTC, end at Wed 2021-08-11 01:46:07 UTC. --
	Aug 11 01:25:51 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:25:51.870015867Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:25:51 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:25:51.870060363Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:25:51 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:25:51.885895617Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:27:24 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:27:24.903541231Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:27:24 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:27:24.903586482Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:27:24 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:27:24.938083080Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain: no such host"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.436301577Z" level=info msg="ignoring event" container=b0009bba953290df054eb200f1c6648c969ba932290a3b18bba86b0592658c17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.618006819Z" level=info msg="ignoring event" container=b90446b631963e50bef6a8a75a2a15d7a9ffff84261302a2bb96e5f11c4ced84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.762705833Z" level=info msg="ignoring event" container=7cff22af5236a77397c7e7eeacbfa78789f22b94b78019ac5dfcd3facc0bb2e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:46 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:46.994643169Z" level=info msg="ignoring event" container=3860fcb8a51501d26cdce4211fd18a42572b2c892f9c87405de72bd0cbf2f8bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.242630986Z" level=info msg="ignoring event" container=df7f7346c1a581282e5739d9eed6f24c8fab35f05c99b7a6975bba0b2cd6c24d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.381704182Z" level=info msg="ignoring event" container=3f9eebfaea242c9341d72f32270da84f72d2e901a09515f0cabf7e95b1ffff7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.553688353Z" level=info msg="ignoring event" container=55095c59e1f79aeb336fb4f44f820b26e94e0adc1a4fb56c0d25cdb1855af32d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.786162375Z" level=info msg="ignoring event" container=c752dfd714905714e39d391be52164555ebe6ee7ef189700544b54d3eccb9b2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:47 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:47.952604740Z" level=info msg="ignoring event" container=6e530d8b48abc859fa9edd24b02f7a9f53b9b9afdd1ebcb36ae58f9bbfb30d59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.206372811Z" level=info msg="ignoring event" container=dc64339803c555dbb29e0fe78fc3a7a221a73bc9c401d00f52d17f6cb3799ca5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.447022969Z" level=info msg="ignoring event" container=6fd8b20389eef48089c8d80a20e5c146562db66ccb85398942cfe088a623b26d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.655115831Z" level=info msg="ignoring event" container=ed8e26d1b95bce2c70563bf48b69b5633d1dde1c3f78a270c36ab6a22494001f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.796142089Z" level=info msg="ignoring event" container=d81ef106b7f252ec3481ed19ad0371d94b6f503f3f5be5af3e1db520e45e58d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:48 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:48.955646957Z" level=info msg="ignoring event" container=db7f849fbee64476ec1977de28957629089fbaa09c5ff9db1711e01720c9d231 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.099985505Z" level=info msg="ignoring event" container=647e6f4b64a6bbb25d1cbcff193c1c31b6d22fe0d0fdee32c81c3c1afb6f9bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.217629161Z" level=info msg="ignoring event" container=f3f08540d15275d79b84fb7ec21d48bf72ab04eb0f310c79f8f13c2449095d19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.383454480Z" level=info msg="ignoring event" container=ced087fa8833d50c92c3dcb0e9967772638e60d8637699901b48b17c4e2cf1d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.522846329Z" level=info msg="ignoring event" container=a99aefb77ed4a374c849fc1ff0f6f463f088407db4a95e83da76a5f6e63b41e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 11 01:28:49 old-k8s-version-20210811011523-1387367 dockerd[205]: time="2021-08-11T01:28:49.702531964Z" level=info msg="ignoring event" container=16ecd8d66225eb19f0a42ffa9723a4fe995695367151d9e03a7f719fdd375a54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001093] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000822] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001337] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000cf2b9e77
	[  +0.001079] FS-Cache: N-key=[8] '38a8010000000000'
	[  +0.008061] FS-Cache: Duplicate cookie detected
	[  +0.000824] FS-Cache: O-cookie c=000000009e8af87d [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001345] FS-Cache: O-cookie d=00000000d0f41ca1 n=00000000882d24dd
	[  +0.001078] FS-Cache: O-key=[8] '38a8010000000000'
	[  +0.000828] FS-Cache: N-cookie c=00000000aef8ae5b [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001344] FS-Cache: N-cookie d=00000000d0f41ca1 n=000000006ce4882d
	[  +0.001069] FS-Cache: N-key=[8] '38a8010000000000'
	[  +1.509820] FS-Cache: Duplicate cookie detected
	[  +0.000799] FS-Cache: O-cookie c=00000000e1eedaf3 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001318] FS-Cache: O-cookie d=00000000d0f41ca1 n=0000000025fbee24
	[  +0.001053] FS-Cache: O-key=[8] '37a8010000000000'
	[  +0.000829] FS-Cache: N-cookie c=000000006f83a19d [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001316] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000d322ea0c
	[  +0.001048] FS-Cache: N-key=[8] '37a8010000000000'
	[  +0.277640] FS-Cache: Duplicate cookie detected
	[  +0.000818] FS-Cache: O-cookie c=000000007ae3c387 [p=00000000d00a921d fl=226 nc=0 na=1]
	[  +0.001327] FS-Cache: O-cookie d=00000000d0f41ca1 n=000000004bd4688e
	[  +0.001069] FS-Cache: O-key=[8] '3ca8010000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000000007642642 [p=00000000d00a921d fl=2 nc=0 na=1]
	[  +0.001309] FS-Cache: N-cookie d=00000000d0f41ca1 n=00000000ae88504f
	[  +0.001071] FS-Cache: N-key=[8] '3ca8010000000000'
	
	* 
	* ==> kernel <==
	*  01:46:09 up 11:28,  0 users,  load average: 6.60, 4.11, 3.26
	Linux old-k8s-version-20210811011523-1387367 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-08-11 01:23:39 UTC, end at Wed 2021-08-11 01:46:09 UTC. --
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.692424  143895 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.692466  143895 status_manager.go:152] Starting to sync pod status with apiserver
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.692486  143895 kubelet.go:1806] Starting kubelet main sync loop.
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.692500  143895 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:08.695893  143895 event.go:200] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events: dial tcp 192.168.58.2:8443: connect: connection refused' (may retry after sleeping)
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.702394  143895 volume_manager.go:248] Starting Kubelet Volume Manager
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:08.706550  143895 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210811011523-1387367?timeout=10s: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:08.713256  143895 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.723183  143895 desired_state_of_world_populator.go:130] Desired state populator starts to run
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.795904  143895 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:08.797360  143895 kubelet.go:2244] node "old-k8s-version-20210811011523-1387367" not found
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.823319  143895 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.879058  143895 clientconn.go:440] parsed scheme: "unix"
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.879080  143895 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.879128  143895 asm_arm64.s:1128] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.879143  143895 clientconn.go:796] ClientConn switching balancer to "pick_first"
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.879184  143895 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x400090a2a0, CONNECTING
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:08.879315  143895 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x400090a2a0, READY
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:08.901184  143895 kubelet.go:2244] node "old-k8s-version-20210811011523-1387367" not found
	Aug 11 01:46:08 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:08.933442  143895 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210811011523-1387367?timeout=10s: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:46:09 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:09.026936  143895 kubelet.go:2244] node "old-k8s-version-20210811011523-1387367" not found
	Aug 11 01:46:09 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:09.037669  143895 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 11 01:46:09 old-k8s-version-20210811011523-1387367 kubelet[143895]: I0811 01:46:09.039057  143895 kubelet_node_status.go:72] Attempting to register node old-k8s-version-20210811011523-1387367
	Aug 11 01:46:09 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:09.041587  143895 kubelet_node_status.go:94] Unable to register node "old-k8s-version-20210811011523-1387367" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.58.2:8443: connect: connection refused
	Aug 11 01:46:09 old-k8s-version-20210811011523-1387367 kubelet[143895]: E0811 01:46:09.139248  143895 kubelet.go:2244] node "old-k8s-version-20210811011523-1387367" not found
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 01:46:09.086199 1758716 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.79s)
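
The kubelet log above shows the node stuck registering itself: every request to https://control-plane.minikube.internal:8443 is refused, the container-status table is empty, and the harness's `kubectl describe nodes` fails for the same reason. As a rough aid for reproducing this locally, the shell sketch below probes the same symptoms from the host. It is only an assumed way to inspect the node and is not part of the test suite; the profile name is copied from the log, everything else is an assumption.

    # Hypothetical troubleshooting sketch for this class of failure.
    PROFILE=old-k8s-version-20210811011523-1387367
    # Was a kube-apiserver container created inside the node at all?
    minikube -p "$PROFILE" ssh -- sudo docker ps -a --filter name=kube-apiserver
    # Does anything answer on the API server port inside the node?
    # (assumes curl is available in the node image)
    minikube -p "$PROFILE" ssh -- curl -sk https://localhost:8443/healthz
    # Collect the same logs the harness tried (and failed) to gather.
    minikube -p "$PROFILE" logs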

                                                
                                    
TestNetworkPlugins/group/cilium/Start (419.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p cilium-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p cilium-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: signal: killed (6m59.064440673s)

                                                
                                                
-- stdout --
	* [cilium-20210811011758-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node cilium-20210811011758-1387367 in cluster cilium-20210811011758-1387367
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 01:50:59.831732 1781805 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:50:59.831965 1781805 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:50:59.831990 1781805 out.go:311] Setting ErrFile to fd 2...
	I0811 01:50:59.832006 1781805 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:50:59.832178 1781805 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:50:59.832509 1781805 out.go:305] Setting JSON to false
	I0811 01:50:59.833659 1781805 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":41607,"bootTime":1628605053,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 01:50:59.833779 1781805 start.go:121] virtualization:  
	I0811 01:50:59.837185 1781805 out.go:177] * [cilium-20210811011758-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 01:50:59.837330 1781805 notify.go:169] Checking for updates...
	I0811 01:50:59.840764 1781805 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 01:50:59.842837 1781805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:50:59.845059 1781805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 01:50:59.846956 1781805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 01:50:59.847540 1781805 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 01:50:59.900835 1781805 docker.go:132] docker version: linux-20.10.8
	I0811 01:50:59.900996 1781805 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:51:00.054506 1781805 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:50:59.971713936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:51:00.054631 1781805 docker.go:244] overlay module found
	I0811 01:51:00.057752 1781805 out.go:177] * Using the docker driver based on user configuration
	I0811 01:51:00.057776 1781805 start.go:278] selected driver: docker
	I0811 01:51:00.057783 1781805 start.go:751] validating driver "docker" against <nil>
	I0811 01:51:00.057802 1781805 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 01:51:00.057847 1781805 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:51:00.057860 1781805 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0811 01:51:00.059927 1781805 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 01:51:00.060294 1781805 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:51:00.176529 1781805 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:51:00.098618786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:51:00.176642 1781805 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 01:51:00.176810 1781805 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 01:51:00.176826 1781805 cni.go:93] Creating CNI manager for "cilium"
	I0811 01:51:00.176832 1781805 start_flags.go:272] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0811 01:51:00.176838 1781805 start_flags.go:277] config:
	{Name:cilium-20210811011758-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:51:00.180622 1781805 out.go:177] * Starting control plane node cilium-20210811011758-1387367 in cluster cilium-20210811011758-1387367
	I0811 01:51:00.180664 1781805 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 01:51:00.183219 1781805 out.go:177] * Pulling base image ...
	I0811 01:51:00.183265 1781805 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:51:00.183317 1781805 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 01:51:00.183327 1781805 cache.go:56] Caching tarball of preloaded images
	I0811 01:51:00.183510 1781805 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 01:51:00.183527 1781805 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 01:51:00.183637 1781805 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/config.json ...
	I0811 01:51:00.183660 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/config.json: {Name:mk8773fadf0919abb2ca76e27e63a2a84a1532d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:51:00.183819 1781805 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 01:51:00.265347 1781805 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 01:51:00.265375 1781805 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 01:51:00.265386 1781805 cache.go:205] Successfully downloaded all kic artifacts
	I0811 01:51:00.265422 1781805 start.go:313] acquiring machines lock for cilium-20210811011758-1387367: {Name:mkdf3de4d13d112bd041e21f9283ea941a168721 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 01:51:00.265560 1781805 start.go:317] acquired machines lock for "cilium-20210811011758-1387367" in 113.058µs
	I0811 01:51:00.265596 1781805 start.go:89] Provisioning new machine with config: &{Name:cilium-20210811011758-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 01:51:00.265677 1781805 start.go:126] createHost starting for "" (driver="docker")
	I0811 01:51:00.268554 1781805 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0811 01:51:00.268825 1781805 start.go:160] libmachine.API.Create for "cilium-20210811011758-1387367" (driver="docker")
	I0811 01:51:00.268852 1781805 client.go:168] LocalClient.Create starting
	I0811 01:51:00.268989 1781805 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 01:51:00.269096 1781805 main.go:130] libmachine: Decoding PEM data...
	I0811 01:51:00.269120 1781805 main.go:130] libmachine: Parsing certificate...
	I0811 01:51:00.269231 1781805 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 01:51:00.269248 1781805 main.go:130] libmachine: Decoding PEM data...
	I0811 01:51:00.269260 1781805 main.go:130] libmachine: Parsing certificate...
	I0811 01:51:00.269737 1781805 cli_runner.go:115] Run: docker network inspect cilium-20210811011758-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 01:51:00.318752 1781805 cli_runner.go:162] docker network inspect cilium-20210811011758-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 01:51:00.318836 1781805 network_create.go:255] running [docker network inspect cilium-20210811011758-1387367] to gather additional debugging logs...
	I0811 01:51:00.318857 1781805 cli_runner.go:115] Run: docker network inspect cilium-20210811011758-1387367
	W0811 01:51:00.357645 1781805 cli_runner.go:162] docker network inspect cilium-20210811011758-1387367 returned with exit code 1
	I0811 01:51:00.357674 1781805 network_create.go:258] error running [docker network inspect cilium-20210811011758-1387367]: docker network inspect cilium-20210811011758-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210811011758-1387367
	I0811 01:51:00.357687 1781805 network_create.go:260] output of [docker network inspect cilium-20210811011758-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210811011758-1387367
	
	** /stderr **
	I0811 01:51:00.357742 1781805 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:51:00.397332 1781805 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3b987835b59e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:af:11:54:b5}}
	I0811 01:51:00.397677 1781805 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0x400000f688] misses:0}
	I0811 01:51:00.397709 1781805 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 01:51:00.397723 1781805 network_create.go:106] attempt to create docker network cilium-20210811011758-1387367 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0811 01:51:00.397779 1781805 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210811011758-1387367
	I0811 01:51:00.502142 1781805 network_create.go:90] docker network cilium-20210811011758-1387367 192.168.58.0/24 created
	I0811 01:51:00.502177 1781805 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20210811011758-1387367" container
	I0811 01:51:00.502264 1781805 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 01:51:00.544012 1781805 cli_runner.go:115] Run: docker volume create cilium-20210811011758-1387367 --label name.minikube.sigs.k8s.io=cilium-20210811011758-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 01:51:00.583419 1781805 oci.go:102] Successfully created a docker volume cilium-20210811011758-1387367
	I0811 01:51:00.583508 1781805 cli_runner.go:115] Run: docker run --rm --name cilium-20210811011758-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210811011758-1387367 --entrypoint /usr/bin/test -v cilium-20210811011758-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 01:51:01.343873 1781805 oci.go:106] Successfully prepared a docker volume cilium-20210811011758-1387367
	W0811 01:51:01.343926 1781805 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 01:51:01.343936 1781805 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 01:51:01.343964 1781805 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:51:01.343988 1781805 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 01:51:01.344000 1781805 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 01:51:01.344048 1781805 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210811011758-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 01:51:01.548102 1781805 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210811011758-1387367 --name cilium-20210811011758-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210811011758-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210811011758-1387367 --network cilium-20210811011758-1387367 --ip 192.168.58.2 --volume cilium-20210811011758-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 01:51:02.263760 1781805 cli_runner.go:115] Run: docker container inspect cilium-20210811011758-1387367 --format={{.State.Running}}
	I0811 01:51:02.325578 1781805 cli_runner.go:115] Run: docker container inspect cilium-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:51:02.391528 1781805 cli_runner.go:115] Run: docker exec cilium-20210811011758-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 01:51:02.531418 1781805 oci.go:278] the created container "cilium-20210811011758-1387367" has a running status.
	I0811 01:51:02.531445 1781805 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa...
	I0811 01:51:02.840350 1781805 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 01:51:02.961092 1781805 cli_runner.go:115] Run: docker container inspect cilium-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:51:03.023224 1781805 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 01:51:03.023242 1781805 kic_runner.go:115] Args: [docker exec --privileged cilium-20210811011758-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 01:51:11.990907 1781805 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210811011758-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (10.646810584s)
	I0811 01:51:11.990939 1781805 kic.go:188] duration metric: took 10.646949 seconds to extract preloaded images to volume
	I0811 01:51:11.991020 1781805 cli_runner.go:115] Run: docker container inspect cilium-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:51:12.061205 1781805 machine.go:88] provisioning docker machine ...
	I0811 01:51:12.061241 1781805 ubuntu.go:169] provisioning hostname "cilium-20210811011758-1387367"
	I0811 01:51:12.061306 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:12.108213 1781805 main.go:130] libmachine: Using SSH client type: native
	I0811 01:51:12.108416 1781805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50480 <nil> <nil>}
	I0811 01:51:12.108437 1781805 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20210811011758-1387367 && echo "cilium-20210811011758-1387367" | sudo tee /etc/hostname
	I0811 01:51:12.250691 1781805 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20210811011758-1387367
	
	I0811 01:51:12.250766 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:12.305999 1781805 main.go:130] libmachine: Using SSH client type: native
	I0811 01:51:12.306165 1781805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50480 <nil> <nil>}
	I0811 01:51:12.306196 1781805 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210811011758-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210811011758-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210811011758-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 01:51:12.432739 1781805 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 01:51:12.432768 1781805 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/k
ey.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 01:51:12.432789 1781805 ubuntu.go:177] setting up certificates
	I0811 01:51:12.432798 1781805 provision.go:83] configureAuth start
	I0811 01:51:12.432858 1781805 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210811011758-1387367
	I0811 01:51:12.482123 1781805 provision.go:137] copyHostCerts
	I0811 01:51:12.482186 1781805 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 01:51:12.482194 1781805 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 01:51:12.482245 1781805 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 01:51:12.482308 1781805 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 01:51:12.482315 1781805 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 01:51:12.482335 1781805 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 01:51:12.482403 1781805 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 01:51:12.482409 1781805 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 01:51:12.482427 1781805 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 01:51:12.482460 1781805 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.cilium-20210811011758-1387367 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20210811011758-1387367]
	I0811 01:51:13.286542 1781805 provision.go:171] copyRemoteCerts
	I0811 01:51:13.286645 1781805 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 01:51:13.286713 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:13.337785 1781805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50480 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:51:13.434207 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 01:51:13.464332 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0811 01:51:13.494966 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 01:51:13.520035 1781805 provision.go:86] duration metric: configureAuth took 1.087220442s
	I0811 01:51:13.520064 1781805 ubuntu.go:193] setting minikube options for container-runtime
	I0811 01:51:13.520272 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:13.575282 1781805 main.go:130] libmachine: Using SSH client type: native
	I0811 01:51:13.575463 1781805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50480 <nil> <nil>}
	I0811 01:51:13.575481 1781805 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 01:51:13.722881 1781805 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 01:51:13.722952 1781805 ubuntu.go:71] root file system type: overlay
	I0811 01:51:13.723173 1781805 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 01:51:13.723236 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:13.776272 1781805 main.go:130] libmachine: Using SSH client type: native
	I0811 01:51:13.776445 1781805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50480 <nil> <nil>}
	I0811 01:51:13.776539 1781805 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 01:51:13.929720 1781805 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 01:51:13.929811 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:13.982707 1781805 main.go:130] libmachine: Using SSH client type: native
	I0811 01:51:13.982886 1781805 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50480 <nil> <nil>}
	I0811 01:51:13.982911 1781805 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 01:51:15.248970 1781805 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 01:51:13.923010724 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 01:51:15.249001 1781805 machine.go:91] provisioned docker machine in 3.187771734s
	I0811 01:51:15.249040 1781805 client.go:171] LocalClient.Create took 14.980180518s
	I0811 01:51:15.249059 1781805 start.go:168] duration metric: libmachine.API.Create for "cilium-20210811011758-1387367" took 14.980234753s
	I0811 01:51:15.249070 1781805 start.go:267] post-start starting for "cilium-20210811011758-1387367" (driver="docker")
	I0811 01:51:15.249076 1781805 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 01:51:15.249138 1781805 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 01:51:15.249193 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:15.318130 1781805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50480 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:51:15.426040 1781805 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 01:51:15.429523 1781805 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 01:51:15.429555 1781805 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 01:51:15.429567 1781805 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 01:51:15.429574 1781805 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 01:51:15.429586 1781805 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 01:51:15.429647 1781805 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 01:51:15.429730 1781805 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 01:51:15.429831 1781805 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 01:51:15.449447 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:51:15.481726 1781805 start.go:270] post-start completed in 232.641335ms
	I0811 01:51:15.482144 1781805 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210811011758-1387367
	I0811 01:51:15.529324 1781805 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/config.json ...
	I0811 01:51:15.529575 1781805 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:51:15.529627 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:15.584773 1781805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50480 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:51:15.669648 1781805 start.go:129] duration metric: createHost completed in 15.403958819s
	I0811 01:51:15.669671 1781805 start.go:80] releasing machines lock for "cilium-20210811011758-1387367", held for 15.404095039s
	I0811 01:51:15.669751 1781805 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210811011758-1387367
	I0811 01:51:15.746679 1781805 ssh_runner.go:149] Run: systemctl --version
	I0811 01:51:15.746737 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:15.746950 1781805 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 01:51:15.747001 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:51:15.838278 1781805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50480 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:51:15.845197 1781805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50480 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:51:16.067030 1781805 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 01:51:16.082531 1781805 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:51:16.097758 1781805 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 01:51:16.097869 1781805 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 01:51:16.115574 1781805 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 01:51:16.137525 1781805 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 01:51:16.277918 1781805 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 01:51:16.420604 1781805 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:51:16.441397 1781805 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 01:51:16.589379 1781805 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 01:51:16.603970 1781805 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:51:16.729367 1781805 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:51:16.827362 1781805 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 01:51:16.827515 1781805 cli_runner.go:115] Run: docker network inspect cilium-20210811011758-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:51:16.889573 1781805 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0811 01:51:16.892966 1781805 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:51:16.910380 1781805 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:51:16.910444 1781805 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:51:16.992016 1781805 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:51:16.992037 1781805 docker.go:466] Images already preloaded, skipping extraction
	I0811 01:51:16.992095 1781805 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:51:17.065083 1781805 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:51:17.065103 1781805 cache_images.go:74] Images are preloaded, skipping loading
	I0811 01:51:17.065160 1781805 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 01:51:17.204838 1781805 cni.go:93] Creating CNI manager for "cilium"
	I0811 01:51:17.204898 1781805 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 01:51:17.204926 1781805 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210811011758-1387367 NodeName:cilium-20210811011758-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 01:51:17.205118 1781805 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cilium-20210811011758-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 01:51:17.205237 1781805 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20210811011758-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cilium-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0811 01:51:17.205315 1781805 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 01:51:17.213218 1781805 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 01:51:17.213279 1781805 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 01:51:17.229343 1781805 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0811 01:51:17.250614 1781805 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 01:51:17.279078 1781805 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0811 01:51:17.297570 1781805 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0811 01:51:17.304375 1781805 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:51:17.326459 1781805 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367 for IP: 192.168.58.2
	I0811 01:51:17.326559 1781805 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 01:51:17.326602 1781805 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 01:51:17.326674 1781805 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/client.key
	I0811 01:51:17.326702 1781805 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/client.crt with IP's: []
	I0811 01:51:17.509038 1781805 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/client.crt ...
	I0811 01:51:17.513127 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/client.crt: {Name:mke6cab4cf36dd4c7d3bcd00defd322dd5e7e894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:51:17.513440 1781805 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/client.key ...
	I0811 01:51:17.513478 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/client.key: {Name:mk0d38c9330acdf859dd3c760c21276c3a952c8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:51:17.513637 1781805 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.key.cee25041
	I0811 01:51:17.513663 1781805 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 01:51:18.221960 1781805 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.crt.cee25041 ...
	I0811 01:51:18.222033 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.crt.cee25041: {Name:mk67b7fe8c6fdfaa76961e54753d9d37cc212db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:51:18.222302 1781805 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.key.cee25041 ...
	I0811 01:51:18.222335 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.key.cee25041: {Name:mk9f604a15a896a07d06462f516ee55dc4e1ff99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:51:18.222481 1781805 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.crt
	I0811 01:51:18.222566 1781805 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.key
	I0811 01:51:18.222646 1781805 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.key
	I0811 01:51:18.222671 1781805 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.crt with IP's: []
	I0811 01:51:18.612569 1781805 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.crt ...
	I0811 01:51:18.617089 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.crt: {Name:mk12590b2b581850db7585b4e7b669ffaae88750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:51:18.617339 1781805 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.key ...
	I0811 01:51:18.617370 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.key: {Name:mkcf62353c694e0964687da3ace439cc0fad3006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:51:18.617614 1781805 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 01:51:18.617675 1781805 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 01:51:18.617698 1781805 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 01:51:18.617771 1781805 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 01:51:18.617818 1781805 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 01:51:18.617874 1781805 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 01:51:18.617942 1781805 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:51:18.619085 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 01:51:18.641345 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0811 01:51:18.664787 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 01:51:18.689416 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/cilium-20210811011758-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0811 01:51:18.706779 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 01:51:18.735419 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 01:51:18.757949 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 01:51:18.789302 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 01:51:18.805544 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 01:51:18.821169 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 01:51:18.846125 1781805 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 01:51:18.873944 1781805 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 01:51:18.893087 1781805 ssh_runner.go:149] Run: openssl version
	I0811 01:51:18.897853 1781805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 01:51:18.910063 1781805 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:51:18.913373 1781805 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:51:18.913452 1781805 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:51:18.921872 1781805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 01:51:18.935747 1781805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 01:51:18.945995 1781805 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 01:51:18.949324 1781805 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 01:51:18.949404 1781805 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 01:51:18.961546 1781805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 01:51:18.977440 1781805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 01:51:18.985987 1781805 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 01:51:18.993503 1781805 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 01:51:18.993595 1781805 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 01:51:19.002965 1781805 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 01:51:19.016124 1781805 kubeadm.go:390] StartCluster: {Name:cilium-20210811011758-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:51:19.016290 1781805 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:51:19.058651 1781805 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 01:51:19.066618 1781805 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 01:51:19.073000 1781805 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 01:51:19.073111 1781805 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:51:19.080730 1781805 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 01:51:19.080808 1781805 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 01:51:20.330206 1781805 out.go:204]   - Generating certificates and keys ...
	I0811 01:51:29.849679 1781805 out.go:204]   - Booting up control plane ...
	I0811 01:51:49.013866 1781805 out.go:204]   - Configuring RBAC rules ...
	I0811 01:51:49.835634 1781805 cni.go:93] Creating CNI manager for "cilium"
	I0811 01:51:49.838122 1781805 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0811 01:51:49.838213 1781805 ssh_runner.go:149] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0811 01:51:49.906795 1781805 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0811 01:51:49.906820 1781805 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (18465 bytes)
	I0811 01:51:49.944405 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 01:51:51.044322 1781805 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.099877654s)
	I0811 01:51:51.044359 1781805 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 01:51:51.044470 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:51.044533 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=877a5691753f15214a0c269ac69dcdc5a4d99fcd minikube.k8s.io/name=cilium-20210811011758-1387367 minikube.k8s.io/updated_at=2021_08_11T01_51_51_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:51.060803 1781805 ops.go:34] apiserver oom_adj: -16
	I0811 01:51:51.221855 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:51.828241 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:52.328915 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:52.828165 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:53.329206 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:53.828903 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:54.328801 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:54.828931 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:55.328200 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:55.828418 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:56.328810 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:56.828529 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:57.328480 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:57.828399 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:58.328587 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:58.829154 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:59.329122 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:51:59.828214 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:52:00.328765 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:52:00.828694 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:52:01.328173 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:52:01.828883 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:52:02.328164 1781805 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 01:52:02.520827 1781805 kubeadm.go:985] duration metric: took 11.476405581s to wait for elevateKubeSystemPrivileges.
	I0811 01:52:02.520857 1781805 kubeadm.go:392] StartCluster complete in 43.504741054s
	I0811 01:52:02.520882 1781805 settings.go:142] acquiring lock: {Name:mk6e7f1e95cc0d18801bf31166529399345d1e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:02.521079 1781805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:52:02.522477 1781805 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig: {Name:mka174137207b71bb699e0c641682c96161f87c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:03.054936 1781805 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210811011758-1387367" rescaled to 1
	I0811 01:52:03.054989 1781805 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 01:52:03.058583 1781805 out.go:177] * Verifying Kubernetes components...
	I0811 01:52:03.058650 1781805 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 01:52:03.055115 1781805 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 01:52:03.055317 1781805 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0811 01:52:03.059005 1781805 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210811011758-1387367"
	I0811 01:52:03.059021 1781805 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210811011758-1387367"
	W0811 01:52:03.059031 1781805 addons.go:147] addon storage-provisioner should already be in state true
	I0811 01:52:03.059047 1781805 addons.go:59] Setting default-storageclass=true in profile "cilium-20210811011758-1387367"
	I0811 01:52:03.059057 1781805 host.go:66] Checking if "cilium-20210811011758-1387367" exists ...
	I0811 01:52:03.059064 1781805 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210811011758-1387367"
	I0811 01:52:03.059419 1781805 cli_runner.go:115] Run: docker container inspect cilium-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:52:03.059590 1781805 cli_runner.go:115] Run: docker container inspect cilium-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:52:03.103291 1781805 node_ready.go:35] waiting up to 5m0s for node "cilium-20210811011758-1387367" to be "Ready" ...
	I0811 01:52:03.111680 1781805 node_ready.go:49] node "cilium-20210811011758-1387367" has status "Ready":"True"
	I0811 01:52:03.111734 1781805 node_ready.go:38] duration metric: took 8.400621ms waiting for node "cilium-20210811011758-1387367" to be "Ready" ...
	I0811 01:52:03.111757 1781805 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 01:52:03.139534 1781805 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace to be "Ready" ...
	I0811 01:52:03.221886 1781805 addons.go:135] Setting addon default-storageclass=true in "cilium-20210811011758-1387367"
	W0811 01:52:03.221916 1781805 addons.go:147] addon default-storageclass should already be in state true
	I0811 01:52:03.221940 1781805 host.go:66] Checking if "cilium-20210811011758-1387367" exists ...
	I0811 01:52:03.222610 1781805 cli_runner.go:115] Run: docker container inspect cilium-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:52:03.248806 1781805 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 01:52:03.248917 1781805 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 01:52:03.248927 1781805 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 01:52:03.248991 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:52:03.296839 1781805 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 01:52:03.325643 1781805 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 01:52:03.325667 1781805 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 01:52:03.325733 1781805 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210811011758-1387367
	I0811 01:52:03.377472 1781805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50480 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:52:03.413684 1781805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50480 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/cilium-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:52:03.672622 1781805 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 01:52:03.938376 1781805 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 01:52:05.176790 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:05.251503 1781805 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.954634307s)
	I0811 01:52:05.251535 1781805 start.go:736] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0811 01:52:05.925850 1781805 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.253195521s)
	I0811 01:52:05.931775 1781805 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.993364186s)
	I0811 01:52:05.935804 1781805 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0811 01:52:05.937666 1781805 addons.go:344] enableAddons completed in 2.880578097s
	I0811 01:52:07.673215 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:09.675014 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:12.171544 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:14.177860 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:16.199625 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:18.670836 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:20.671103 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:22.671729 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:25.171455 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:27.172796 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:29.671553 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:32.172949 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:34.671475 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:37.172161 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:39.175705 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:41.178090 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:43.671526 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:45.673951 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:48.187557 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:50.671985 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:52.672187 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:55.171263 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:57.272941 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:52:59.671437 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:01.678351 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:04.171720 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:06.172029 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:08.174921 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:10.671578 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:13.171732 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:15.173291 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:17.174039 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:19.671076 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:21.671268 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:23.677847 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:26.173296 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:28.173413 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:30.676338 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:33.171175 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:35.174959 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:37.671057 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:39.672013 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:42.188901 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:44.219815 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:46.671738 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:49.171057 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:51.171114 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:53.172501 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:55.679876 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:53:58.184225 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:00.672439 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:02.679145 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:05.176342 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:07.671351 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:09.672326 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:12.171907 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:14.176598 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:16.672157 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:18.676004 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:20.679573 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:23.171506 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:25.172379 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:27.172941 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:29.175090 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:31.671645 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:33.676813 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:36.175239 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:38.671002 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:41.172560 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:43.671584 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:45.672549 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:48.171565 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:50.670751 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:52.670834 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:54.678223 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:57.171470 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:54:59.671553 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:01.671675 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:03.672493 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:05.686401 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:08.172349 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:10.671775 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:13.171763 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:15.671746 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:17.678011 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:20.173351 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:22.671541 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:24.673292 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:27.172162 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:29.174484 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:31.671084 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:34.179439 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:36.669999 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:38.673497 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:41.171850 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:43.680049 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:46.172423 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:48.670908 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:51.171907 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:53.171987 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:55.671172 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:57.672194 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:55:59.678965 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:02.190109 1781805 pod_ready.go:102] pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:03.176719 1781805 pod_ready.go:81] duration metric: took 4m0.037032666s waiting for pod "cilium-operator-99d899fb5-pmzjb" in "kube-system" namespace to be "Ready" ...
	E0811 01:56:03.176746 1781805 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0811 01:56:03.176755 1781805 pod_ready.go:78] waiting up to 5m0s for pod "cilium-wdjk8" in "kube-system" namespace to be "Ready" ...
	I0811 01:56:05.191941 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:07.688629 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:09.810103 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:12.187785 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:14.189234 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:16.687651 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:18.689214 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:20.693469 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:23.192175 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:25.691822 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:27.695557 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:30.188179 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:32.189398 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:34.190849 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:36.689758 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:39.190220 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:41.689132 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:43.690913 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:45.692565 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:48.188729 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:50.274691 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:52.687878 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:54.691341 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:57.188598 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:56:59.194861 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:01.687524 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:03.687918 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:05.695456 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:08.189724 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:10.688059 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:12.688140 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:15.188029 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:17.190520 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:19.197922 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:21.691236 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:24.188127 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:26.188633 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:28.687912 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:30.757186 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:33.188159 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:35.188777 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:37.687245 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:39.691246 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:42.188488 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:44.687829 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:46.688223 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:49.188469 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:51.691116 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:54.189931 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"
	I0811 01:57:56.688347 1781805 pod_ready.go:102] pod "cilium-wdjk8" in "kube-system" namespace has status "Ready":"False"

                                                
                                                
** /stderr **
net_test.go:100: failed start: signal: killed
--- FAIL: TestNetworkPlugins/group/cilium/Start (419.10s)
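This failure is a readiness timeout, not a crash: the pod_ready.go lines above poll the PodReady condition of each kube-system pod every couple of seconds; the cilium-operator pod never reported Ready (that wait gave up after roughly 4 minutes), the cilium agent pod "cilium-wdjk8" was still NotReady when its own "up to 5m0s" wait was interrupted, and the test harness then killed the minikube start process ("failed start: signal: killed"), failing the test at 419s. A minimal sketch of that kind of readiness poll with client-go follows; it is an illustration only, not minikube's pod_ready.go, the kubeconfig path is a placeholder, and the pod name is taken from the log.

// readinesspoll.go: illustrative sketch of a PodReady poll (not minikube code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube resolves this per test profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 5m, matching the cadence and the "waiting up to
	// 5m0s" budget seen in the log; err is non-nil if the pod never became Ready.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, getErr := client.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-wdjk8", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		return isPodReady(pod), nil
	})
	fmt.Println("wait result:", err)
}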

                                                
                                    
TestNetworkPlugins/group/calico/Start (83.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p calico-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker
E0811 01:52:31.556707 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:52:41.080313 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 01:52:59.239993 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:53:04.813100 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p calico-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (1m23.175173109s)

                                                
                                                
-- stdout --
	* [calico-20210811011758-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20210811011758-1387367 in cluster calico-20210811011758-1387367
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
	* Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	
	

                                                
                                                
-- /stdout --
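In the stderr capture that follows, minikube probes free space on the node over SSH (the "df -h /var | awk 'NR==2{print $5}'" run) and then warns that Docker is at 85% of capacity, suggesting a prune; the warning is informational and the start continues afterwards. Below is a rough Go equivalent of such a used-percentage check, an illustration only and not minikube's code; the 85% threshold simply mirrors the warning text, and the result can differ slightly from df because of reserved blocks.

// diskcheck.go: illustrative used-space check for /var (Linux).
package main

import (
	"fmt"
	"syscall"
)

// usedPercent returns the fraction of the filesystem at path that is in use.
func usedPercent(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	return 100 * float64(total-avail) / float64(total), nil
}

func main() {
	pct, err := usedPercent("/var")
	if err != nil {
		panic(err)
	}
	fmt.Printf("/var is %.0f%% used\n", pct)
	if pct >= 85 {
		fmt.Println("consider running 'docker system prune' to free space")
	}
}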
** stderr ** 
	I0811 01:52:12.372437 1790800 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:52:12.372580 1790800 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:52:12.376016 1790800 out.go:311] Setting ErrFile to fd 2...
	I0811 01:52:12.376042 1790800 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:52:12.376201 1790800 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:52:12.376537 1790800 out.go:305] Setting JSON to false
	I0811 01:52:12.377728 1790800 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":41679,"bootTime":1628605053,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 01:52:12.377838 1790800 start.go:121] virtualization:  
	I0811 01:52:12.381217 1790800 out.go:177] * [calico-20210811011758-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 01:52:12.381365 1790800 notify.go:169] Checking for updates...
	I0811 01:52:12.383500 1790800 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 01:52:12.385662 1790800 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 01:52:12.387618 1790800 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 01:52:12.389514 1790800 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 01:52:12.390101 1790800 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 01:52:12.487967 1790800 docker.go:132] docker version: linux-20.10.8
	I0811 01:52:12.488069 1790800 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:52:12.624789 1790800 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:52:12.558506678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:52:12.624912 1790800 docker.go:244] overlay module found
	I0811 01:52:12.627797 1790800 out.go:177] * Using the docker driver based on user configuration
	I0811 01:52:12.627825 1790800 start.go:278] selected driver: docker
	I0811 01:52:12.627832 1790800 start.go:751] validating driver "docker" against <nil>
	I0811 01:52:12.627852 1790800 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 01:52:12.627910 1790800 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 01:52:12.627928 1790800 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0811 01:52:12.630160 1790800 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 01:52:12.630496 1790800 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 01:52:12.753223 1790800 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 01:52:12.676940414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 01:52:12.753354 1790800 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 01:52:12.753509 1790800 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 01:52:12.753533 1790800 cni.go:93] Creating CNI manager for "calico"
	I0811 01:52:12.753540 1790800 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0811 01:52:12.753550 1790800 start_flags.go:277] config:
	{Name:calico-20210811011758-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:52:12.756914 1790800 out.go:177] * Starting control plane node calico-20210811011758-1387367 in cluster calico-20210811011758-1387367
	I0811 01:52:12.756957 1790800 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 01:52:12.759780 1790800 out.go:177] * Pulling base image ...
	I0811 01:52:12.759815 1790800 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:52:12.759855 1790800 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 01:52:12.759863 1790800 cache.go:56] Caching tarball of preloaded images
	I0811 01:52:12.760041 1790800 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0811 01:52:12.760059 1790800 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on docker
	I0811 01:52:12.760160 1790800 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/config.json ...
	I0811 01:52:12.760206 1790800 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/config.json: {Name:mk8b03f10b0fd95dbfa07ea6036dcd6c706ab7b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:12.760362 1790800 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 01:52:12.842166 1790800 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 01:52:12.842191 1790800 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 01:52:12.842206 1790800 cache.go:205] Successfully downloaded all kic artifacts
	I0811 01:52:12.842248 1790800 start.go:313] acquiring machines lock for calico-20210811011758-1387367: {Name:mkbd62a3c919985646f0b71480036c08c01a2c71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 01:52:12.842389 1790800 start.go:317] acquired machines lock for "calico-20210811011758-1387367" in 119.696µs
	I0811 01:52:12.842425 1790800 start.go:89] Provisioning new machine with config: &{Name:calico-20210811011758-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0811 01:52:12.842509 1790800 start.go:126] createHost starting for "" (driver="docker")
	I0811 01:52:12.847128 1790800 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0811 01:52:12.847389 1790800 start.go:160] libmachine.API.Create for "calico-20210811011758-1387367" (driver="docker")
	I0811 01:52:12.847421 1790800 client.go:168] LocalClient.Create starting
	I0811 01:52:12.847490 1790800 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem
	I0811 01:52:12.847529 1790800 main.go:130] libmachine: Decoding PEM data...
	I0811 01:52:12.847554 1790800 main.go:130] libmachine: Parsing certificate...
	I0811 01:52:12.847674 1790800 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem
	I0811 01:52:12.847697 1790800 main.go:130] libmachine: Decoding PEM data...
	I0811 01:52:12.847709 1790800 main.go:130] libmachine: Parsing certificate...
	I0811 01:52:12.848095 1790800 cli_runner.go:115] Run: docker network inspect calico-20210811011758-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 01:52:12.894988 1790800 cli_runner.go:162] docker network inspect calico-20210811011758-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 01:52:12.895062 1790800 network_create.go:255] running [docker network inspect calico-20210811011758-1387367] to gather additional debugging logs...
	I0811 01:52:12.895083 1790800 cli_runner.go:115] Run: docker network inspect calico-20210811011758-1387367
	W0811 01:52:12.935673 1790800 cli_runner.go:162] docker network inspect calico-20210811011758-1387367 returned with exit code 1
	I0811 01:52:12.935704 1790800 network_create.go:258] error running [docker network inspect calico-20210811011758-1387367]: docker network inspect calico-20210811011758-1387367: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210811011758-1387367
	I0811 01:52:12.935720 1790800 network_create.go:260] output of [docker network inspect calico-20210811011758-1387367]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210811011758-1387367
	
	** /stderr **
	I0811 01:52:12.935774 1790800 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:52:12.978767 1790800 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x400012f108] misses:0}
	I0811 01:52:12.978819 1790800 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0811 01:52:12.978836 1790800 network_create.go:106] attempt to create docker network calico-20210811011758-1387367 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 01:52:12.978893 1790800 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210811011758-1387367
	I0811 01:52:13.068514 1790800 network_create.go:90] docker network calico-20210811011758-1387367 192.168.49.0/24 created
	I0811 01:52:13.068542 1790800 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20210811011758-1387367" container
	I0811 01:52:13.068613 1790800 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0811 01:52:13.130788 1790800 cli_runner.go:115] Run: docker volume create calico-20210811011758-1387367 --label name.minikube.sigs.k8s.io=calico-20210811011758-1387367 --label created_by.minikube.sigs.k8s.io=true
	I0811 01:52:13.186780 1790800 oci.go:102] Successfully created a docker volume calico-20210811011758-1387367
	I0811 01:52:13.186855 1790800 cli_runner.go:115] Run: docker run --rm --name calico-20210811011758-1387367-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210811011758-1387367 --entrypoint /usr/bin/test -v calico-20210811011758-1387367:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0811 01:52:14.074027 1790800 oci.go:106] Successfully prepared a docker volume calico-20210811011758-1387367
	W0811 01:52:14.074072 1790800 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0811 01:52:14.074080 1790800 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0811 01:52:14.074144 1790800 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 01:52:14.074504 1790800 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:52:14.074527 1790800 kic.go:179] Starting extracting preloaded images to volume ...
	I0811 01:52:14.074583 1790800 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-20210811011758-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 01:52:14.263984 1790800 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210811011758-1387367 --name calico-20210811011758-1387367 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210811011758-1387367 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210811011758-1387367 --network calico-20210811011758-1387367 --ip 192.168.49.2 --volume calico-20210811011758-1387367:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0811 01:52:14.964708 1790800 cli_runner.go:115] Run: docker container inspect calico-20210811011758-1387367 --format={{.State.Running}}
	I0811 01:52:15.021185 1790800 cli_runner.go:115] Run: docker container inspect calico-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:52:15.076950 1790800 cli_runner.go:115] Run: docker exec calico-20210811011758-1387367 stat /var/lib/dpkg/alternatives/iptables
	I0811 01:52:15.215555 1790800 oci.go:278] the created container "calico-20210811011758-1387367" has a running status.
	I0811 01:52:15.215582 1790800 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210811011758-1387367/id_rsa...
	I0811 01:52:16.386346 1790800 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210811011758-1387367/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 01:52:16.525167 1790800 cli_runner.go:115] Run: docker container inspect calico-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:52:16.586101 1790800 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 01:52:16.586120 1790800 kic_runner.go:115] Args: [docker exec --privileged calico-20210811011758-1387367 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 01:52:25.923253 1790800 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-20210811011758-1387367:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (11.848622417s)
	I0811 01:52:25.923282 1790800 kic.go:188] duration metric: took 11.848753 seconds to extract preloaded images to volume
	I0811 01:52:25.923375 1790800 cli_runner.go:115] Run: docker container inspect calico-20210811011758-1387367 --format={{.State.Status}}
	I0811 01:52:25.981096 1790800 machine.go:88] provisioning docker machine ...
	I0811 01:52:25.981135 1790800 ubuntu.go:169] provisioning hostname "calico-20210811011758-1387367"
	I0811 01:52:25.981201 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:26.036003 1790800 main.go:130] libmachine: Using SSH client type: native
	I0811 01:52:26.036215 1790800 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50485 <nil> <nil>}
	I0811 01:52:26.036237 1790800 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210811011758-1387367 && echo "calico-20210811011758-1387367" | sudo tee /etc/hostname
	I0811 01:52:26.187307 1790800 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210811011758-1387367
	
	I0811 01:52:26.187422 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:26.235059 1790800 main.go:130] libmachine: Using SSH client type: native
	I0811 01:52:26.235235 1790800 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50485 <nil> <nil>}
	I0811 01:52:26.235262 1790800 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210811011758-1387367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210811011758-1387367/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210811011758-1387367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 01:52:26.367874 1790800 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0811 01:52:26.367934 1790800 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/k
ey.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube}
	I0811 01:52:26.367967 1790800 ubuntu.go:177] setting up certificates
	I0811 01:52:26.367991 1790800 provision.go:83] configureAuth start
	I0811 01:52:26.368064 1790800 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210811011758-1387367
	I0811 01:52:26.411700 1790800 provision.go:137] copyHostCerts
	I0811 01:52:26.411769 1790800 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem, removing ...
	I0811 01:52:26.411778 1790800 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem
	I0811 01:52:26.411847 1790800 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.pem (1082 bytes)
	I0811 01:52:26.411928 1790800 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem, removing ...
	I0811 01:52:26.411935 1790800 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem
	I0811 01:52:26.411956 1790800 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cert.pem (1123 bytes)
	I0811 01:52:26.411996 1790800 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem, removing ...
	I0811 01:52:26.412001 1790800 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem
	I0811 01:52:26.412020 1790800 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/key.pem (1679 bytes)
	I0811 01:52:26.412063 1790800 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem org=jenkins.calico-20210811011758-1387367 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210811011758-1387367]
	I0811 01:52:27.027747 1790800 provision.go:171] copyRemoteCerts
	I0811 01:52:27.027816 1790800 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 01:52:27.027885 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:27.062957 1790800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50485 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:52:27.148310 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 01:52:27.168946 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0811 01:52:27.188876 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 01:52:27.206252 1790800 provision.go:86] duration metric: configureAuth took 838.237179ms
	I0811 01:52:27.206274 1790800 ubuntu.go:193] setting minikube options for container-runtime
	I0811 01:52:27.206478 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:27.239608 1790800 main.go:130] libmachine: Using SSH client type: native
	I0811 01:52:27.239782 1790800 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50485 <nil> <nil>}
	I0811 01:52:27.239800 1790800 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0811 01:52:27.359043 1790800 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0811 01:52:27.359061 1790800 ubuntu.go:71] root file system type: overlay
	I0811 01:52:27.359219 1790800 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0811 01:52:27.359279 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:27.402295 1790800 main.go:130] libmachine: Using SSH client type: native
	I0811 01:52:27.402464 1790800 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50485 <nil> <nil>}
	I0811 01:52:27.402564 1790800 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0811 01:52:27.552384 1790800 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0811 01:52:27.552459 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:27.606196 1790800 main.go:130] libmachine: Using SSH client type: native
	I0811 01:52:27.606367 1790800 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50485 <nil> <nil>}
	I0811 01:52:27.606387 1790800 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0811 01:52:28.588887 1790800 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:55:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-08-11 01:52:27.542559924 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0811 01:52:28.588957 1790800 machine.go:91] provisioned docker machine in 2.607836807s
	I0811 01:52:28.588980 1790800 client.go:171] LocalClient.Create took 15.741549207s
	I0811 01:52:28.589061 1790800 start.go:168] duration metric: libmachine.API.Create for "calico-20210811011758-1387367" took 15.741672653s
	I0811 01:52:28.589087 1790800 start.go:267] post-start starting for "calico-20210811011758-1387367" (driver="docker")
	I0811 01:52:28.589111 1790800 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 01:52:28.589213 1790800 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 01:52:28.589301 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:28.629564 1790800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50485 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:52:28.712320 1790800 ssh_runner.go:149] Run: cat /etc/os-release
	I0811 01:52:28.715045 1790800 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 01:52:28.715069 1790800 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 01:52:28.715080 1790800 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 01:52:28.715087 1790800 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0811 01:52:28.715096 1790800 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/addons for local assets ...
	I0811 01:52:28.715144 1790800 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files for local assets ...
	I0811 01:52:28.715226 1790800 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem -> 13873672.pem in /etc/ssl/certs
	I0811 01:52:28.715317 1790800 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0811 01:52:28.721876 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:52:28.739135 1790800 start.go:270] post-start completed in 150.014989ms
	I0811 01:52:28.739502 1790800 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210811011758-1387367
	I0811 01:52:28.773370 1790800 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/config.json ...
	I0811 01:52:28.773622 1790800 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:52:28.773676 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:28.811795 1790800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50485 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:52:28.906296 1790800 out.go:177] 
	W0811 01:52:28.906491 1790800 out.go:242] X Docker is nearly out of disk space, which may cause deployments to fail! (85% of capacity)
	X Docker is nearly out of disk space, which may cause deployments to fail! (85% of capacity)
	W0811 01:52:28.906615 1790800 out.go:242] * Suggestion: 
	
	    Try one or more of the following to free up space on the device:
	    
	    1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
	    2. Increase the storage allocated to Docker for Desktop by clicking on:
	    Docker icon > Preferences > Resources > Disk Image Size
	    3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
	W0811 01:52:28.906682 1790800 out.go:242] * Related issue: https://github.com/kubernetes/minikube/issues/9024
	I0811 01:52:28.909073 1790800 out.go:177] 
	I0811 01:52:28.909144 1790800 start.go:129] duration metric: createHost completed in 16.066615323s
	I0811 01:52:28.909165 1790800 start.go:80] releasing machines lock for "calico-20210811011758-1387367", held for 16.066761078s
	I0811 01:52:28.909275 1790800 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210811011758-1387367
	I0811 01:52:28.946424 1790800 ssh_runner.go:149] Run: systemctl --version
	I0811 01:52:28.946478 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:28.946491 1790800 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0811 01:52:28.946546 1790800 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210811011758-1387367
	I0811 01:52:29.002262 1790800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50485 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:52:29.014304 1790800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50485 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/calico-20210811011758-1387367/id_rsa Username:docker}
	I0811 01:52:29.266766 1790800 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0811 01:52:29.276377 1790800 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:52:29.287223 1790800 cruntime.go:249] skipping containerd shutdown because we are bound to it
	I0811 01:52:29.287285 1790800 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0811 01:52:29.296981 1790800 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 01:52:29.309694 1790800 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0811 01:52:29.400029 1790800 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0811 01:52:29.525909 1790800 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0811 01:52:29.537383 1790800 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0811 01:52:29.638960 1790800 ssh_runner.go:149] Run: sudo systemctl start docker
	I0811 01:52:29.649465 1790800 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:52:29.746305 1790800 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0811 01:52:29.828832 1790800 out.go:204] * Preparing Kubernetes v1.21.3 on Docker 20.10.7 ...
	I0811 01:52:29.828959 1790800 cli_runner.go:115] Run: docker network inspect calico-20210811011758-1387367 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 01:52:29.871663 1790800 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 01:52:29.876846 1790800 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:52:29.889410 1790800 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 01:52:29.889496 1790800 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:52:29.941165 1790800 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:52:29.941186 1790800 docker.go:466] Images already preloaded, skipping extraction
	I0811 01:52:29.941243 1790800 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0811 01:52:29.984688 1790800 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.21.3
	k8s.gcr.io/kube-controller-manager:v1.21.3
	k8s.gcr.io/kube-proxy:v1.21.3
	k8s.gcr.io/kube-scheduler:v1.21.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.4.1
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/coredns/coredns:v1.8.0
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/metrics-scraper:v1.0.4
	
	-- /stdout --
	I0811 01:52:29.984717 1790800 cache_images.go:74] Images are preloaded, skipping loading
	I0811 01:52:29.984778 1790800 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0811 01:52:30.279749 1790800 cni.go:93] Creating CNI manager for "calico"
	I0811 01:52:30.279778 1790800 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 01:52:30.279792 1790800 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210811011758-1387367 NodeName:calico-20210811011758-1387367 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0811 01:52:30.279962 1790800 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20210811011758-1387367"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 01:52:30.280050 1790800 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210811011758-1387367 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0811 01:52:30.280114 1790800 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0811 01:52:30.289743 1790800 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 01:52:30.289823 1790800 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 01:52:30.296949 1790800 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0811 01:52:30.310743 1790800 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 01:52:30.324574 1790800 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0811 01:52:30.338265 1790800 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 01:52:30.341603 1790800 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 01:52:30.350469 1790800 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367 for IP: 192.168.49.2
	I0811 01:52:30.350567 1790800 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key
	I0811 01:52:30.350588 1790800 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key
	I0811 01:52:30.350638 1790800 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/client.key
	I0811 01:52:30.350649 1790800 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/client.crt with IP's: []
	I0811 01:52:31.180953 1790800 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/client.crt ...
	I0811 01:52:31.180980 1790800 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/client.crt: {Name:mk77a88bd7b93a6a9a6e4ae64a0979dca0af874a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:31.181206 1790800 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/client.key ...
	I0811 01:52:31.181222 1790800 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/client.key: {Name:mk26d50f353dc63a32db0f921379ab8acdeb79bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:31.181318 1790800 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.key.dd3b5fb2
	I0811 01:52:31.181330 1790800 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 01:52:31.830433 1790800 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.crt.dd3b5fb2 ...
	I0811 01:52:31.830470 1790800 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.crt.dd3b5fb2: {Name:mk607c10c7b7e0368c4c11b2737381e1b4444a2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:31.830688 1790800 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.key.dd3b5fb2 ...
	I0811 01:52:31.830708 1790800 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.key.dd3b5fb2: {Name:mk9edf86c3b3647d7da4f2924112c69907ccd80f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:31.830798 1790800 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.crt
	I0811 01:52:31.830859 1790800 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.key
	I0811 01:52:31.830907 1790800 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.key
	I0811 01:52:31.830920 1790800 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.crt with IP's: []
	I0811 01:52:32.433525 1790800 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.crt ...
	I0811 01:52:32.433560 1790800 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.crt: {Name:mk483da967630415cc0331f14074711aff2268b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:32.433776 1790800 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.key ...
	I0811 01:52:32.433791 1790800 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.key: {Name:mk32e18e4f69aa3f1262b517298792a8f9e3f123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 01:52:32.433974 1790800 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem (1338 bytes)
	W0811 01:52:32.434017 1790800 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367_empty.pem, impossibly tiny 0 bytes
	I0811 01:52:32.434030 1790800 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 01:52:32.434056 1790800 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/ca.pem (1082 bytes)
	I0811 01:52:32.434085 1790800 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/cert.pem (1123 bytes)
	I0811 01:52:32.434113 1790800 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/key.pem (1679 bytes)
	I0811 01:52:32.434159 1790800 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem (1708 bytes)
	I0811 01:52:32.435278 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 01:52:32.452874 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0811 01:52:32.470380 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 01:52:32.487862 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/calico-20210811011758-1387367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0811 01:52:32.505714 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 01:52:32.522851 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0811 01:52:32.544743 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 01:52:32.567217 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 01:52:32.590919 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/ssl/certs/13873672.pem --> /usr/share/ca-certificates/13873672.pem (1708 bytes)
	I0811 01:52:32.613664 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 01:52:32.634100 1790800 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/certs/1387367.pem --> /usr/share/ca-certificates/1387367.pem (1338 bytes)
	I0811 01:52:32.667624 1790800 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 01:52:32.687881 1790800 ssh_runner.go:149] Run: openssl version
	I0811 01:52:32.701878 1790800 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13873672.pem && ln -fs /usr/share/ca-certificates/13873672.pem /etc/ssl/certs/13873672.pem"
	I0811 01:52:32.711700 1790800 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/13873672.pem
	I0811 01:52:32.715333 1790800 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 11 00:46 /usr/share/ca-certificates/13873672.pem
	I0811 01:52:32.715404 1790800 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13873672.pem
	I0811 01:52:32.720820 1790800 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13873672.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 01:52:32.728122 1790800 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 01:52:32.735592 1790800 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:52:32.738920 1790800 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 11 00:30 /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:52:32.738981 1790800 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 01:52:32.745083 1790800 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 01:52:32.752289 1790800 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1387367.pem && ln -fs /usr/share/ca-certificates/1387367.pem /etc/ssl/certs/1387367.pem"
	I0811 01:52:32.761372 1790800 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1387367.pem
	I0811 01:52:32.764656 1790800 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 11 00:46 /usr/share/ca-certificates/1387367.pem
	I0811 01:52:32.764713 1790800 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1387367.pem
	I0811 01:52:32.771187 1790800 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1387367.pem /etc/ssl/certs/51391683.0"
	I0811 01:52:32.778915 1790800 kubeadm.go:390] StartCluster: {Name:calico-20210811011758-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210811011758-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 01:52:32.779041 1790800 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:52:32.836469 1790800 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 01:52:32.847717 1790800 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 01:52:32.859363 1790800 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 01:52:32.859443 1790800 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:52:32.867532 1790800 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 01:52:32.867588 1790800 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 01:52:34.104774 1790800 out.go:204]   - Generating certificates and keys ...
	I0811 01:52:39.507493 1790800 out.go:204]   - Booting up control plane ...
	I0811 01:52:56.118463 1790800 out.go:204]   - Configuring RBAC rules ...
	I0811 01:52:56.548294 1790800 cni.go:93] Creating CNI manager for "calico"
	I0811 01:52:56.551255 1790800 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0811 01:52:56.551343 1790800 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0811 01:52:56.551357 1790800 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22469 bytes)
	I0811 01:52:56.573042 1790800 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W0811 01:52:57.559668 1790800 out.go:242] ! initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	I0811 01:52:57.559706 1790800 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0811 01:53:13.751710 1790800 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (16.191960551s)
	I0811 01:53:13.751784 1790800 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0811 01:53:13.762699 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0811 01:53:13.803724 1790800 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0811 01:53:13.803800 1790800 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 01:53:13.811063 1790800 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 01:53:13.811105 1790800 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 01:53:14.800477 1790800 out.go:204]   - Generating certificates and keys ...
	I0811 01:53:16.928098 1790800 out.go:204]   - Booting up control plane ...
	I0811 01:53:33.504639 1790800 out.go:204]   - Configuring RBAC rules ...
	I0811 01:53:33.959926 1790800 cni.go:93] Creating CNI manager for "calico"
	I0811 01:53:33.962620 1790800 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0811 01:53:33.962690 1790800 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0811 01:53:33.962704 1790800 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22469 bytes)
	I0811 01:53:33.986979 1790800 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 01:53:34.507957 1790800 kubeadm.go:392] StartCluster complete in 1m1.729045206s
	I0811 01:53:34.508048 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0811 01:53:34.556591 1790800 logs.go:270] 1 containers: [29b54be90378]
	I0811 01:53:34.556674 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0811 01:53:34.601987 1790800 logs.go:270] 1 containers: [4bcf8321bed8]
	I0811 01:53:34.602063 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0811 01:53:34.650294 1790800 logs.go:270] 0 containers: []
	W0811 01:53:34.650318 1790800 logs.go:272] No container was found matching "coredns"
	I0811 01:53:34.650376 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0811 01:53:34.698816 1790800 logs.go:270] 1 containers: [be6e042a2d8f]
	I0811 01:53:34.698887 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0811 01:53:34.745133 1790800 logs.go:270] 0 containers: []
	W0811 01:53:34.745200 1790800 logs.go:272] No container was found matching "kube-proxy"
	I0811 01:53:34.745261 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0811 01:53:34.785189 1790800 logs.go:270] 0 containers: []
	W0811 01:53:34.785210 1790800 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0811 01:53:34.785265 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0811 01:53:34.829120 1790800 logs.go:270] 0 containers: []
	W0811 01:53:34.829138 1790800 logs.go:272] No container was found matching "storage-provisioner"
	I0811 01:53:34.829210 1790800 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0811 01:53:34.876987 1790800 logs.go:270] 1 containers: [2ac4e19ab281]
	I0811 01:53:34.877054 1790800 logs.go:123] Gathering logs for kube-scheduler [be6e042a2d8f] ...
	I0811 01:53:34.877065 1790800 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 be6e042a2d8f"
	I0811 01:53:34.929278 1790800 logs.go:123] Gathering logs for Docker ...
	I0811 01:53:34.929307 1790800 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0811 01:53:34.948935 1790800 logs.go:123] Gathering logs for container status ...
	I0811 01:53:34.948964 1790800 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0811 01:53:34.989133 1790800 logs.go:123] Gathering logs for kubelet ...
	I0811 01:53:34.989161 1790800 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0811 01:53:35.104498 1790800 logs.go:123] Gathering logs for dmesg ...
	I0811 01:53:35.104532 1790800 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0811 01:53:35.122733 1790800 logs.go:123] Gathering logs for etcd [4bcf8321bed8] ...
	I0811 01:53:35.122760 1790800 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 4bcf8321bed8"
	I0811 01:53:35.170926 1790800 logs.go:123] Gathering logs for describe nodes ...
	I0811 01:53:35.171065 1790800 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0811 01:53:35.334098 1790800 logs.go:123] Gathering logs for kube-apiserver [29b54be90378] ...
	I0811 01:53:35.334125 1790800 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 29b54be90378"
	I0811 01:53:35.406264 1790800 logs.go:123] Gathering logs for kube-controller-manager [2ac4e19ab281] ...
	I0811 01:53:35.406298 1790800 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 2ac4e19ab281"
	W0811 01:53:35.458941 1790800 out.go:371] Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	W0811 01:53:35.458975 1790800 out.go:242] * 
	W0811 01:53:35.459137 1790800 out.go:242] X Error starting cluster: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	W0811 01:53:35.459158 1790800 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0811 01:53:35.462107 1790800 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                              │
	│                                                                                                                                                            │
	│    * Please attach the following file to the GitHub issue:                                                                                                 │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 01:53:35.465979 1790800 out.go:177] 
	W0811 01:53:35.466158 1790800 out.go:242] X Exiting due to GUEST_START: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output: -- stdout --
	configmap/calico-config created
	
	-- /stdout --
	** stderr ** 
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	** /stderr **: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	configmap/calico-config created
	
	stderr:
	error: error validating "/var/tmp/minikube/cni.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]; if you choose to ignore these errors, turn validation off with --validate=false
	
	W0811 01:53:35.466183 1790800 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0811 01:53:35.468781 1790800 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                              │
	│                                                                                                                                                            │
	│    * Please attach the following file to the GitHub issue:                                                                                                 │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 01:53:35.475582 1790800 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (83.21s)
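
Note on the failure above: the kubectl validation error that aborts this start (and its retry) points at the CustomResourceDefinition objects in the applied Calico manifest. Kubernetes v1.21 serves apiextensions.k8s.io/v1, which removed the single v1beta1-era "version" field and requires a "versions" list. A minimal sketch of the shape the v1 validator expects is below; the CRD name, group, and schema are illustrative placeholders, not taken from the failing /var/tmp/minikube/cni.yaml.

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      # Illustrative name only; v1 requires metadata.name to be "<plural>.<group>".
      name: examples.crd.projectcalico.org
    spec:
      group: crd.projectcalico.org
      scope: Cluster
      names:
        plural: examples
        kind: Example
      # v1 requires this list; a bare "version: v1" field is the
      # "unknown field" reported in the error above.
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true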

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (900.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210811011758-1387367 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-d9t7k" [d277fdd6-b88c-4b44-b14e-e9d9aeb30d46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-d9t7k" [d277fdd6-b88c-4b44-b14e-e9d9aeb30d46] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:58:08.809043 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
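The cert_rotation.go errors interleaved through this log come from client-go periodically re-reading client certificates for other minikube profiles; when a parallel test has already deleted a profile directory, the reload hits a missing client.crt. A rough, hypothetical illustration of that failure mode follows, with placeholder paths and tls.LoadX509KeyPair standing in for client-go's internal reload.

// Illustrative only: reloading a client key pair whose files were deleted.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Placeholder paths; the real errors reference per-profile files under
	// .minikube/profiles/<name>/.
	certFile := "/tmp/example-profile/client.crt"
	keyFile := "/tmp/example-profile/client.key"

	// Reading a deleted file surfaces as a path error:
	// "open .../client.crt: no such file or directory".
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		fmt.Println("key failed with :", err)
	}
}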
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:58:24.130370 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:12.387412 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:20.203500 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:27.855411 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:47.890169 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:51.942609 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
E0811 01:59:51.947898 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
E0811 01:59:51.958116 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
E0811 01:59:51.978334 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:52.018693 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
E0811 01:59:52.099020 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
E0811 01:59:52.259381 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
E0811 01:59:52.580025 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:53.220626 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:54.500822 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 01:59:57.061595 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:00:02.181748 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:00:12.421969 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:00:24.962188 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:00:32.903007 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:00:40.287523 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:00:52.649898 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:07.757290 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
E0811 02:01:07.762573 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
E0811 02:01:07.772782 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
E0811 02:01:07.793058 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
E0811 02:01:07.833334 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
E0811 02:01:07.913571 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
E0811 02:01:07.970822 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:08.074216 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
E0811 02:01:08.394769 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:09.035614 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:10.315970 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:12.876944 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:13.863177 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:17.997131 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:28.237486 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:01:48.718608 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:02:29.678819 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:02:31.556899 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:02:35.783467 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:02:41.079696 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:03:04.808174 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 47 times)
E0811 02:03:51.599308 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 3 times)
E0811 02:03:54.600437 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 18 times)
E0811 02:04:12.387420 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 8 times)
E0811 02:04:20.203500 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 31 times)
E0811 02:04:51.942462 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 28 times)
E0811 02:05:19.624118 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 5 times)
E0811 02:05:24.962324 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 16 times)
E0811 02:05:40.286759 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 27 times)
E0811 02:06:07.758220 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 28 times)
E0811 02:06:35.439782 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 56 times)
E0811 02:07:31.557158 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 10 times)
E0811 02:07:41.079687 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 23 times)
E0811 02:08:04.808090 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded (message repeated 68 times)
E0811 02:09:12.387393 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 8 times)
E0811 02:09:20.203556 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 31 times)
E0811 02:09:51.942551 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/custom-weave-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 33 times)
E0811 02:10:24.961472 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 16 times)
E0811 02:10:40.286753 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 3 times)
E0811 02:10:43.250773 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
E0811 02:10:44.127642 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 23 times)
E0811 02:11:07.758147 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/enable-default-cni-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 41 times)
E0811 02:11:48.010305 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 15 times)
E0811 02:12:03.331025 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 12 times)
E0811 02:12:15.431275 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 16 times)
E0811 02:12:31.557285 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 10 times)
E0811 02:12:41.079733 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
helpers_test.go:325: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: context deadline exceeded
(identical warning repeated 12 times)
net_test.go:145: ***** TestNetworkPlugins/group/kindnet/NetCatPod: pod "app=netcat" failed to start within 15m0s: timed out waiting for the condition ****
net_test.go:145: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kindnet-20210811011758-1387367 -n kindnet-20210811011758-1387367
net_test.go:145: TestNetworkPlugins/group/kindnet/NetCatPod: showing logs for failed pods as of 2021-08-11 02:12:52.340201693 +0000 UTC m=+6197.128437042
net_test.go:145: (dbg) Run:  kubectl --context kindnet-20210811011758-1387367 describe po netcat-66fbc655d5-d9t7k -n default
net_test.go:145: (dbg) Non-zero exit: kubectl --context kindnet-20210811011758-1387367 describe po netcat-66fbc655d5-d9t7k -n default: context deadline exceeded (1.691µs)
net_test.go:145: kubectl --context kindnet-20210811011758-1387367 describe po netcat-66fbc655d5-d9t7k -n default: context deadline exceeded
net_test.go:145: (dbg) Run:  kubectl --context kindnet-20210811011758-1387367 logs netcat-66fbc655d5-d9t7k -n default
net_test.go:145: (dbg) Non-zero exit: kubectl --context kindnet-20210811011758-1387367 logs netcat-66fbc655d5-d9t7k -n default: context deadline exceeded (500ns)
net_test.go:145: kubectl --context kindnet-20210811011758-1387367 logs netcat-66fbc655d5-d9t7k -n default: context deadline exceeded
net_test.go:146: failed waiting for netcat pod: app=netcat within 15m0s: timed out waiting for the condition
--- FAIL: TestNetworkPlugins/group/kindnet/NetCatPod (900.74s)
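The describe/logs attempts above returned context deadline exceeded only because the test's own 15m0s context was already spent, so they say nothing about the pod itself. A minimal sketch (assuming the kindnet profile is still up; the context name and label are taken from the log above) of inspecting the netcat pod by hand:

    # list the pod behind the app=netcat selector and see which node/IP it got
    kubectl --context kindnet-20210811011758-1387367 -n default get pods -l app=netcat -o wide
    # events usually show why it never became Ready (image pull, scheduling, CNI)
    kubectl --context kindnet-20210811011758-1387367 -n default describe pods -l app=netcat
    # container output, if it ever started
    kubectl --context kindnet-20210811011758-1387367 -n default logs -l app=netcat --tail=50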

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p bridge-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: context deadline exceeded (828ns)
net_test.go:100: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/bridge/Start (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubenet-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: context deadline exceeded (886ns)
net_test.go:100: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.00s)
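This kubenet start and the bridge start above both exited in under a microsecond with context deadline exceeded, i.e. the enclosing test context had already run out (largely consumed by the 15m kindnet/NetCatPod wait), so neither minikube invocation actually ran. A rough sketch of retrying one of them outside the harness, with the flags copied from the log:

    out/minikube-linux-arm64 start -p bridge-20210811011758-1387367 --memory=2048 \
      --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge \
      --driver=docker --container-runtime=docker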

                                                
                                    

Test pass (207/246)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 9.69
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.08
10 TestDownloadOnly/v1.21.3/json-events 11.29
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.08
17 TestDownloadOnly/v1.22.0-rc.0/json-events 10.79
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.35
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
26 TestOffline 80.85
30 TestAddons/parallel/Ingress 38.9
31 TestAddons/parallel/MetricsServer 5.72
34 TestAddons/parallel/CSI 39.94
35 TestAddons/parallel/GCPAuth 15.84
36 TestCertOptions 46.09
37 TestDockerFlags 45.22
38 TestForceSystemdFlag 47.45
39 TestForceSystemdEnv 47.89
44 TestErrorSpam/setup 44.89
45 TestErrorSpam/start 0.93
46 TestErrorSpam/status 1.1
47 TestErrorSpam/pause 1.33
48 TestErrorSpam/unpause 1.35
49 TestErrorSpam/stop 3.69
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 67.54
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 6.65
56 TestFunctional/serial/KubeContext 0.07
57 TestFunctional/serial/KubectlGetPods 0.35
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.99
61 TestFunctional/serial/CacheCmd/cache/add_local 1.06
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
63 TestFunctional/serial/CacheCmd/cache/list 0.1
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.26
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.16
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
69 TestFunctional/serial/ExtraConfig 32.54
70 TestFunctional/serial/ComponentHealth 0.11
71 TestFunctional/serial/LogsCmd 1.65
72 TestFunctional/serial/LogsFileCmd 1.65
74 TestFunctional/parallel/ConfigCmd 0.49
75 TestFunctional/parallel/DashboardCmd 5.38
76 TestFunctional/parallel/DryRun 0.66
77 TestFunctional/parallel/InternationalLanguage 0.23
78 TestFunctional/parallel/StatusCmd 1.15
81 TestFunctional/parallel/ServiceCmd 12.73
82 TestFunctional/parallel/AddonsCmd 0.26
83 TestFunctional/parallel/PersistentVolumeClaim 25.08
85 TestFunctional/parallel/SSHCmd 0.87
86 TestFunctional/parallel/CpCmd 0.73
88 TestFunctional/parallel/FileSync 0.35
89 TestFunctional/parallel/CertSync 1.91
91 TestFunctional/parallel/DockerEnv 1.29
93 TestFunctional/parallel/NodeLabels 0.08
94 TestFunctional/parallel/LoadImage 1.5
95 TestFunctional/parallel/RemoveImage 1.97
96 TestFunctional/parallel/LoadImageFromFile 1.3
97 TestFunctional/parallel/BuildImage 2.68
98 TestFunctional/parallel/ListImages 0.28
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
101 TestFunctional/parallel/Version/short 0.08
102 TestFunctional/parallel/Version/components 1.45
104 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
106 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
107 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
111 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
113 TestFunctional/parallel/ProfileCmd/profile_list 0.39
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
115 TestFunctional/parallel/MountCmd/any-port 6.09
116 TestFunctional/parallel/MountCmd/specific-port 2.22
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
120 TestFunctional/delete_busybox_image 0.07
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.03
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.3
147 TestKicCustomNetwork/create_custom_network 44.55
148 TestKicCustomNetwork/use_default_bridge_network 48.44
149 TestKicExistingNetwork 48.03
150 TestMainNoArgs 0.06
153 TestMultiNode/serial/FreshStart2Nodes 115.2
156 TestMultiNode/serial/AddNode 43.78
157 TestMultiNode/serial/ProfileList 0.31
158 TestMultiNode/serial/CopyFile 2.46
159 TestMultiNode/serial/StopNode 2.42
160 TestMultiNode/serial/StartAfterStop 25.27
161 TestMultiNode/serial/RestartKeepsNodes 110.22
162 TestMultiNode/serial/DeleteNode 5.65
163 TestMultiNode/serial/StopMultiNode 12.28
164 TestMultiNode/serial/RestartMultiNode 89.67
165 TestMultiNode/serial/ValidateNameConflict 48.57
171 TestDebPackageInstall/install_arm64_debian:sid/minikube 0
172 TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver 14.38
174 TestDebPackageInstall/install_arm64_debian:latest/minikube 0
175 TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver 12.3
177 TestDebPackageInstall/install_arm64_debian:10/minikube 0
178 TestDebPackageInstall/install_arm64_debian:10/kvm2-driver 12.6
180 TestDebPackageInstall/install_arm64_debian:9/minikube 0
181 TestDebPackageInstall/install_arm64_debian:9/kvm2-driver 9.78
183 TestDebPackageInstall/install_arm64_ubuntu:latest/minikube 0
184 TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver 15
186 TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube 0
187 TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver 13.98
189 TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube 0
190 TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver 15.3
192 TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube 0
193 TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver 12.49
199 TestInsufficientStorage 16.07
200 TestRunningBinaryUpgrade 113.85
202 TestKubernetesUpgrade 137.36
212 TestStartStop/group/old-k8s-version/serial/FirstStart 127.76
214 TestPause/serial/Start 59.88
215 TestStartStop/group/old-k8s-version/serial/DeployApp 354.75
216 TestPause/serial/SecondStartNoReconfiguration 5.84
217 TestPause/serial/Pause 0.69
218 TestPause/serial/VerifyStatus 0.32
219 TestPause/serial/Unpause 0.59
220 TestPause/serial/PauseAgain 3.96
221 TestPause/serial/DeletePaused 2.45
222 TestPause/serial/VerifyDeletedResources 0.37
234 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.94
235 TestStartStop/group/old-k8s-version/serial/Stop 10.94
236 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
238 TestStoppedBinaryUpgrade/MinikubeLogs 1.5
240 TestStartStop/group/no-preload/serial/FirstStart 80.04
241 TestStartStop/group/no-preload/serial/DeployApp 9.72
242 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
243 TestStartStop/group/no-preload/serial/Stop 11.49
244 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
245 TestStartStop/group/no-preload/serial/SecondStart 351.35
246 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.03
247 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
248 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
249 TestStartStop/group/no-preload/serial/Pause 3.54
251 TestStartStop/group/embed-certs/serial/FirstStart 73.53
252 TestStartStop/group/embed-certs/serial/DeployApp 8.5
254 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.42
255 TestStartStop/group/embed-certs/serial/Stop 11.22
256 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
257 TestStartStop/group/embed-certs/serial/SecondStart 392.18
258 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
259 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.35
260 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
261 TestStartStop/group/embed-certs/serial/Pause 3.28
263 TestStartStop/group/default-k8s-different-port/serial/FirstStart 69.26
264 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.89
265 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1.31
266 TestStartStop/group/default-k8s-different-port/serial/Stop 11.36
267 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.23
268 TestStartStop/group/default-k8s-different-port/serial/SecondStart 362.66
270 TestStartStop/group/newest-cni/serial/FirstStart 66.49
271 TestStartStop/group/newest-cni/serial/DeployApp 0
272 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
273 TestStartStop/group/newest-cni/serial/Stop 11.2
274 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
275 TestStartStop/group/newest-cni/serial/SecondStart 26.35
276 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
277 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
278 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.43
279 TestStartStop/group/newest-cni/serial/Pause 3.04
280 TestNetworkPlugins/group/auto/Start 75.58
281 TestNetworkPlugins/group/auto/KubeletFlags 0.29
282 TestNetworkPlugins/group/auto/NetCatPod 10.67
283 TestNetworkPlugins/group/auto/DNS 0.21
284 TestNetworkPlugins/group/auto/Localhost 0.2
285 TestNetworkPlugins/group/auto/HairPin 5.24
286 TestNetworkPlugins/group/false/Start 61.13
287 TestNetworkPlugins/group/false/KubeletFlags 0.3
288 TestNetworkPlugins/group/false/NetCatPod 11.49
289 TestNetworkPlugins/group/false/DNS 0.22
290 TestNetworkPlugins/group/false/Localhost 0.21
291 TestNetworkPlugins/group/false/HairPin 5.2
293 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.04
294 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.13
295 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.77
296 TestStartStop/group/default-k8s-different-port/serial/Pause 4.89
298 TestNetworkPlugins/group/custom-weave/Start 72.31
299 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.3
300 TestNetworkPlugins/group/custom-weave/NetCatPod 9.73
301 TestNetworkPlugins/group/enable-default-cni/Start 63.34
302 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
303 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.42
304 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
305 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
306 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
307 TestNetworkPlugins/group/kindnet/Start 85.1
308 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
309 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
x
+
TestDownloadOnly/v1.14.0/json-events (9.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210811002935-1387367 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210811002935-1387367 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.688469088s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (9.69s)
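The json-events subtest drives minikube start with -o=json and --download-only (the exact command above), presumably checking the emitted per-line JSON events, as the subtest name suggests. A rough sketch of reproducing the stream by hand and pretty-printing each event (the profile name here is made up, and jq is assumed to be installed):

    out/minikube-linux-arm64 start -o=json --download-only -p download-check --force \
      --kubernetes-version=v1.14.0 --driver=docker --container-runtime=docker | jq .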

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.14.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210811002935-1387367
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210811002935-1387367: exit status 85 (76.687664ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 00:29:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 00:29:35.321712 1387373 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:29:35.321880 1387373 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:29:35.321888 1387373 out.go:311] Setting ErrFile to fd 2...
	I0811 00:29:35.321891 1387373 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:29:35.322034 1387373 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0811 00:29:35.322179 1387373 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0811 00:29:35.322431 1387373 out.go:305] Setting JSON to true
	I0811 00:29:35.323272 1387373 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36722,"bootTime":1628605053,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:29:35.323364 1387373 start.go:121] virtualization:  
	I0811 00:29:35.326493 1387373 notify.go:169] Checking for updates...
	I0811 00:29:35.329322 1387373 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 00:29:35.367762 1387373 docker.go:132] docker version: linux-20.10.8
	I0811 00:29:35.367869 1387373 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:29:35.484036 1387373 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:29:35.421189644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:29:35.484162 1387373 docker.go:244] overlay module found
	I0811 00:29:35.486753 1387373 start.go:278] selected driver: docker
	I0811 00:29:35.486769 1387373 start.go:751] validating driver "docker" against <nil>
	I0811 00:29:35.486902 1387373 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:29:35.569612 1387373 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:29:35.513981062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:29:35.569730 1387373 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0811 00:29:35.569991 1387373 start_flags.go:344] Using suggested 2200MB memory alloc based on sys=7845MB, container=7845MB
	I0811 00:29:35.570082 1387373 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0811 00:29:35.570095 1387373 cni.go:93] Creating CNI manager for ""
	I0811 00:29:35.570102 1387373 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:29:35.570108 1387373 start_flags.go:277] config:
	{Name:download-only-20210811002935-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210811002935-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:29:35.572448 1387373 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:29:35.574910 1387373 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0811 00:29:35.575019 1387373 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:29:35.629189 1387373 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:29:35.629216 1387373 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:29:35.661217 1387373 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-arm64.tar.lz4
	I0811 00:29:35.661250 1387373 cache.go:56] Caching tarball of preloaded images
	I0811 00:29:35.661541 1387373 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0811 00:29:35.664565 1387373 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-docker-overlay2-arm64.tar.lz4 ...
	I0811 00:29:35.784553 1387373 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-arm64.tar.lz4?checksum=md5:0eebed761a2dbdd2633a2aff7cfcbea6 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210811002935-1387367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.08s)
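Note: the exit status 85 above is expected for a download-only profile; `minikube logs` has no control plane node to read from yet. A minimal sketch of the same flow run by hand (the profile name below is an example, not the one used in this run):

    out/minikube-linux-arm64 start --download-only -p download-only-example --kubernetes-version=v1.14.0 --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 logs -p download-only-example    # exits 85: no control plane node exists yet
    out/minikube-linux-arm64 delete -p download-only-example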

                                                
                                    
TestDownloadOnly/v1.21.3/json-events (11.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210811002935-1387367 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210811002935-1387367 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.292497053s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (11.29s)

                                                
                                    
TestDownloadOnly/v1.21.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210811002935-1387367
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210811002935-1387367: exit status 85 (82.228238ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 00:29:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 00:29:45.095607 1387453 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:29:45.095714 1387453 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:29:45.095724 1387453 out.go:311] Setting ErrFile to fd 2...
	I0811 00:29:45.095727 1387453 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:29:45.095863 1387453 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0811 00:29:45.095981 1387453 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0811 00:29:45.096096 1387453 out.go:305] Setting JSON to true
	I0811 00:29:45.096937 1387453 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36732,"bootTime":1628605053,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:29:45.097034 1387453 start.go:121] virtualization:  
	I0811 00:29:45.099968 1387453 notify.go:169] Checking for updates...
	W0811 00:29:45.103329 1387453 start.go:659] api.Load failed for download-only-20210811002935-1387367: filestore "download-only-20210811002935-1387367": Docker machine "download-only-20210811002935-1387367" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 00:29:45.103421 1387453 driver.go:335] Setting default libvirt URI to qemu:///system
	W0811 00:29:45.103451 1387453 start.go:659] api.Load failed for download-only-20210811002935-1387367: filestore "download-only-20210811002935-1387367": Docker machine "download-only-20210811002935-1387367" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 00:29:45.140513 1387453 docker.go:132] docker version: linux-20.10.8
	I0811 00:29:45.140629 1387453 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:29:45.243565 1387453 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:29:45.186211825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:29:45.243685 1387453 docker.go:244] overlay module found
	I0811 00:29:45.246221 1387453 start.go:278] selected driver: docker
	I0811 00:29:45.246244 1387453 start.go:751] validating driver "docker" against &{Name:download-only-20210811002935-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210811002935-1387367 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:29:45.246431 1387453 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:29:45.329186 1387453 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:29:45.272995225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:29:45.329549 1387453 cni.go:93] Creating CNI manager for ""
	I0811 00:29:45.329563 1387453 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:29:45.329575 1387453 start_flags.go:277] config:
	{Name:download-only-20210811002935-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210811002935-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:29:45.332196 1387453 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:29:45.334745 1387453 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:29:45.334825 1387453 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:29:45.388336 1387453 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:29:45.388368 1387453 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:29:45.406292 1387453 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 00:29:45.406317 1387453 cache.go:56] Caching tarball of preloaded images
	I0811 00:29:45.406579 1387453 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime docker
	I0811 00:29:45.408965 1387453 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 ...
	I0811 00:29:45.550796 1387453 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4?checksum=md5:52c0f874123a928e982c52c4805bc7f7 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
	I0811 00:29:53.796719 1387453 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 ...
	I0811 00:29:53.796819 1387453 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210811002935-1387367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.08s)
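Note: the preload tarball is fetched with an md5 checksum appended to the URL query string and re-verified after download (the "verifying checksum" lines above). A sketch of the same check done manually, using the URL and checksum shown in this log:

    curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4
    echo "52c0f874123a928e982c52c4805bc7f7  preloaded-images-k8s-v11-v1.21.3-docker-overlay2-arm64.tar.lz4" | md5sum -c -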

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/json-events (10.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210811002935-1387367 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210811002935-1387367 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.789631579s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (10.79s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210811002935-1387367
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210811002935-1387367: exit status 85 (82.830851ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/11 00:29:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 00:29:56.471570 1387535 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:29:56.471682 1387535 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:29:56.471693 1387535 out.go:311] Setting ErrFile to fd 2...
	I0811 00:29:56.471697 1387535 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:29:56.471848 1387535 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	W0811 00:29:56.472148 1387535 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/config/config.json: no such file or directory
	I0811 00:29:56.472279 1387535 out.go:305] Setting JSON to true
	I0811 00:29:56.473098 1387535 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36743,"bootTime":1628605053,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:29:56.473179 1387535 start.go:121] virtualization:  
	I0811 00:29:56.477174 1387535 notify.go:169] Checking for updates...
	W0811 00:29:56.480089 1387535 start.go:659] api.Load failed for download-only-20210811002935-1387367: filestore "download-only-20210811002935-1387367": Docker machine "download-only-20210811002935-1387367" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 00:29:56.480157 1387535 driver.go:335] Setting default libvirt URI to qemu:///system
	W0811 00:29:56.480183 1387535 start.go:659] api.Load failed for download-only-20210811002935-1387367: filestore "download-only-20210811002935-1387367": Docker machine "download-only-20210811002935-1387367" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 00:29:56.515893 1387535 docker.go:132] docker version: linux-20.10.8
	I0811 00:29:56.516013 1387535 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:29:56.614569 1387535 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:29:56.553860682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:29:56.614682 1387535 docker.go:244] overlay module found
	I0811 00:29:56.617702 1387535 start.go:278] selected driver: docker
	I0811 00:29:56.617722 1387535 start.go:751] validating driver "docker" against &{Name:download-only-20210811002935-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210811002935-1387367 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:29:56.617923 1387535 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:29:56.698188 1387535 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-11 00:29:56.644493255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:29:56.698580 1387535 cni.go:93] Creating CNI manager for ""
	I0811 00:29:56.698600 1387535 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0811 00:29:56.698608 1387535 start_flags.go:277] config:
	{Name:download-only-20210811002935-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210811002935-1387367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:29:56.702111 1387535 cache.go:117] Beginning downloading kic base image for docker with docker
	I0811 00:29:56.706676 1387535 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0811 00:29:56.706761 1387535 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0811 00:29:56.755271 1387535 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0811 00:29:56.755298 1387535 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0811 00:29:56.784376 1387535 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0811 00:29:56.784397 1387535 cache.go:56] Caching tarball of preloaded images
	I0811 00:29:56.784654 1387535 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0811 00:29:56.787564 1387535 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0811 00:29:56.904417 1387535 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:f16dfe6ac63d4a95c27402a87744c40e -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0811 00:30:04.808567 1387535 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0811 00:30:04.808702 1387535 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0811 00:30:06.101649 1387535 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on docker
	I0811 00:30:06.101809 1387535 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/download-only-20210811002935-1387367/config.json ...
	I0811 00:30:06.102022 1387535 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime docker
	I0811 00:30:06.102261 1387535 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/cache/linux/v1.22.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210811002935-1387367"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.08s)
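Note: for this version the kubectl binary is downloaded as well, with its checksum taken from the published .sha256 file (see the download.go line above). A manual equivalent, assuming the same release URLs:

    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.22.0-rc.0/bin/linux/arm64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum -c -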

                                                
                                    
TestDownloadOnly/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-20210811002935-1387367
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestOffline (80.85s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-20210811011523-1387367 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-20210811011523-1387367 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m18.346576824s)
helpers_test.go:176: Cleaning up "offline-docker-20210811011523-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-20210811011523-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-20210811011523-1387367: (2.504156284s)
--- PASS: TestOffline (80.85s)
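Note: the start invocation exercised here can be reproduced directly; only the profile name below is an example, the remaining flags match the log above:

    out/minikube-linux-arm64 start -p offline-docker-example --memory=2048 --wait=true --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 delete -p offline-docker-example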

                                                
                                    
TestAddons/parallel/Ingress (38.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:340: "ingress-nginx-admission-create-grgf6" [4fa96cd9-e70d-47c9-891f-3bce7bc1de40] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 3.510924ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210811003021-1387367 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210811003021-1387367 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:340: "nginx" [98db6029-795c-4df4-9979-18401d56922d] Pending
helpers_test.go:340: "nginx" [98db6029-795c-4df4-9979-18401d56922d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:340: "nginx" [98db6029-795c-4df4-9979-18401d56922d] Running
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.029453775s
addons_test.go:204: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210811003021-1387367 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:265: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable ingress --alsologtostderr -v=1: (28.739283967s)
--- PASS: TestAddons/parallel/Ingress (38.90s)
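Note: the ingress check above boils down to curling port 80 from inside the minikube node with the Host header of the test Ingress. A hand-run sketch, assuming the addon is enabled and the nginx pod/service/ingress from testdata have already been applied (profile name is an example):

    out/minikube-linux-arm64 -p addons-example addons enable ingress
    out/minikube-linux-arm64 -p addons-example ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    out/minikube-linux-arm64 -p addons-example addons disable ingress --alsologtostderr -v=1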

                                                
                                    
TestAddons/parallel/MetricsServer (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 3.825647ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:340: "metrics-server-77c99ccb96-7bz4t" [f135d883-ab80-4dd8-a141-333424152bcb] Running
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014348817s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210811003021-1387367 top pods -n kube-system
addons_test.go:374: kubectl --context addons-20210811003021-1387367 top pods -n kube-system: unexpected stderr: W0811 00:36:36.454915 1400894 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)
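Note: the stderr captured above is only kubectl's deprecation warning for `kubectl top`; passing the flag the warning itself suggests silences it. An equivalent manual check (context/profile name is an example):

    kubectl --context addons-example top pods -n kube-system --use-protocol-buffers
    out/minikube-linux-arm64 -p addons-example addons disable metrics-server --alsologtostderr -v=1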

                                                
                                    
TestAddons/parallel/CSI (39.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 7.483328ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210811003021-1387367 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210811003021-1387367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210811003021-1387367 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210811003021-1387367 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:340: "task-pv-pod" [764c4b3b-8872-4a29-be47-2e7047398b7f] Pending
helpers_test.go:340: "task-pv-pod" [764c4b3b-8872-4a29-be47-2e7047398b7f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:340: "task-pv-pod" [764c4b3b-8872-4a29-be47-2e7047398b7f] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.012602656s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210811003021-1387367 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:415: (dbg) Run:  kubectl --context addons-20210811003021-1387367 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:415: (dbg) Run:  kubectl --context addons-20210811003021-1387367 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210811003021-1387367 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-20210811003021-1387367 delete pod task-pv-pod: (3.932180525s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210811003021-1387367 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210811003021-1387367 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:390: (dbg) Run:  kubectl --context addons-20210811003021-1387367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210811003021-1387367 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:340: "task-pv-pod-restore" [632bce94-82c9-41b5-8df2-a5d1af852915] Pending
helpers_test.go:340: "task-pv-pod-restore" [632bce94-82c9-41b5-8df2-a5d1af852915] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:340: "task-pv-pod-restore" [632bce94-82c9-41b5-8df2-a5d1af852915] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.019511454s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210811003021-1387367 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210811003021-1387367 delete pod task-pv-pod-restore: (1.917301097s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210811003021-1387367 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210811003021-1387367 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.909962221s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.94s)
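Note: the helpers above poll PVC and VolumeSnapshot status via jsonpath until they look healthy (presumably "Bound" and "true" respectively; the expected values are not shown in this log). The same probes can be run by hand against any context:

    kubectl --context addons-example get pvc hpvc -o jsonpath='{.status.phase}' -n default
    kubectl --context addons-example get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}' -n default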

                                                
                                    
TestAddons/parallel/GCPAuth (15.84s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210811003021-1387367 create -f testdata/busybox.yaml
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [80c0ba99-274b-46a1-a794-a0245cc52637] Pending
helpers_test.go:340: "busybox" [80c0ba99-274b-46a1-a794-a0245cc52637] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [80c0ba99-274b-46a1-a794-a0245cc52637] Running
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.014587921s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210811003021-1387367 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:643: (dbg) Run:  kubectl --context addons-20210811003021-1387367 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210811003021-1387367 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:709: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:709: (dbg) Done: out/minikube-linux-arm64 -p addons-20210811003021-1387367 addons disable gcp-auth --alsologtostderr -v=1: (5.87403825s)
--- PASS: TestAddons/parallel/GCPAuth (15.84s)
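Note: the gcp-auth checks amount to verifying that the addon injected credentials and project metadata into the busybox pod. The same spot-checks, with an example context name:

    kubectl --context addons-example exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-example exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-example exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"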

                                                
                                    
TestCertOptions (46.09s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-20210811012019-1387367 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0811 01:20:44.126186 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
cert_options_test.go:47: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-20210811012019-1387367 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (43.205492482s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-20210811012019-1387367 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210811012019-1387367 config view
helpers_test.go:176: Cleaning up "cert-options-20210811012019-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-20210811012019-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-20210811012019-1387367: (2.496358878s)
--- PASS: TestCertOptions (46.09s)
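Note: the certificate check inspects the generated apiserver certificate inside the node; the extra --apiserver-ips/--apiserver-names values should appear among its Subject Alternative Names. A sketch with an example profile name:

    out/minikube-linux-arm64 -p cert-options-example ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"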

                                                
                                    
TestDockerFlags (45.22s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-20210811011934-1387367 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-20210811011934-1387367 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.192640857s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-20210811011934-1387367 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-20210811011934-1387367 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20210811011934-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-20210811011934-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-20210811011934-1387367: (2.437684055s)
--- PASS: TestDockerFlags (45.22s)
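
A rough sketch of the same check, using the flags from the run above with an illustrative profile name: --docker-env entries are expected to surface under the docker unit's Environment property, and --docker-opt entries on its ExecStart line.

    out/minikube-linux-arm64 start -p docker-flags-demo --memory=2048 --install-addons=false --wait=false \
      --docker-env=FOO=BAR --docker-env=BAZ=BAT \
      --docker-opt=debug --docker-opt=icc=true \
      --driver=docker --container-runtime=docker

    # Environment= should list FOO=BAR and BAZ=BAT
    out/minikube-linux-arm64 -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"

    # ExecStart should reflect the --docker-opt values (debug, icc=true)
    out/minikube-linux-arm64 -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"

    out/minikube-linux-arm64 delete -p docker-flags-demo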

                                                
                                    
TestForceSystemdFlag (47.45s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-20210811011846-1387367 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-20210811011846-1387367 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.578352297s)
docker_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-20210811011846-1387367 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210811011846-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-20210811011846-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-20210811011846-1387367: (2.481513282s)
--- PASS: TestForceSystemdFlag (47.45s)
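
The cgroup-driver check above boils down to two commands, sketched here with an illustrative profile name; with --force-systemd the in-node docker daemon is expected to report systemd rather than cgroupfs.

    out/minikube-linux-arm64 start -p force-systemd-demo --memory=2048 --force-systemd \
      --driver=docker --container-runtime=docker

    # Expected output: systemd
    out/minikube-linux-arm64 -p force-systemd-demo ssh "docker info --format {{.CgroupDriver}}"

    out/minikube-linux-arm64 delete -p force-systemd-demo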

                                                
                                    
TestForceSystemdEnv (47.89s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-20210811011758-1387367 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0811 01:18:04.809081 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
docker_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-20210811011758-1387367 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.006129812s)
docker_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-20210811011758-1387367 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20210811011758-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-20210811011758-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-20210811011758-1387367: (2.479686574s)
--- PASS: TestForceSystemdEnv (47.89s)

                                                
                                    
TestErrorSpam/setup (44.89s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-20210811004508-1387367 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210811004508-1387367 --driver=docker  --container-runtime=docker
error_spam_test.go:78: (dbg) Done: out/minikube-linux-arm64 start -p nospam-20210811004508-1387367 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210811004508-1387367 --driver=docker  --container-runtime=docker: (44.893369622s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (44.89s)

                                                
                                    
TestErrorSpam/start (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 start --dry-run
--- PASS: TestErrorSpam/start (0.93s)

                                                
                                    
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (1.33s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 pause
--- PASS: TestErrorSpam/pause (1.33s)

                                                
                                    
TestErrorSpam/unpause (1.35s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

                                                
                                    
TestErrorSpam/stop (3.69s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 stop: (3.411210204s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210811004508-1387367 --log_dir /tmp/nospam-20210811004508-1387367 stop
--- PASS: TestErrorSpam/stop (3.69s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/files/etc/test/nested/copy/1387367/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (67.54s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:1982: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m7.541867466s)
--- PASS: TestFunctional/serial/StartWithProxy (67.54s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.65s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --alsologtostderr -v=8: (6.650316694s)
functional_test.go:631: soft start took 6.65076465s for "functional-20210811004603-1387367" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.65s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210811004603-1387367 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add k8s.gcr.io/pause:3.1: (2.195313175s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add k8s.gcr.io/pause:3.3: (1.954588935s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add k8s.gcr.io/pause:latest: (1.842590972s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.99s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210811004603-1387367 /tmp/functional-20210811004603-1387367350132399
functional_test.go:1024: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add minikube-local-cache-test:functional-20210811004603-1387367
functional_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache delete minikube-local-cache-test:functional-20210811004603-1387367
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210811004603-1387367
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)
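
A sketch of the local-image variant above, assuming a directory ./local-cache-test containing any minimal Dockerfile (hypothetical) and an illustrative tag; the profile name matches the run.

    # Build a throwaway local image, then add it to minikube's image cache
    docker build -t minikube-local-cache-test:demo ./local-cache-test
    out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache add minikube-local-cache-test:demo

    # Remove it from the cache and from the host again
    out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache delete minikube-local-cache-test:demo
    docker rmi minikube-local-cache-test:demo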

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (286.558497ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 cache reload: (1.332291836s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)
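
The reload sequence above, condensed; the profile name is taken from the run. The first crictl inspecti is expected to fail after the image is removed from the node, and to succeed again once cache reload pushes the cached image back in.

    P=functional-20210811004603-1387367
    # Remove the cached image inside the node
    out/minikube-linux-arm64 -p $P ssh sudo docker rmi k8s.gcr.io/pause:latest
    # Fails: the image is no longer present in the node's runtime
    out/minikube-linux-arm64 -p $P ssh sudo crictl inspecti k8s.gcr.io/pause:latest
    # Re-load everything in minikube's cache into the node
    out/minikube-linux-arm64 -p $P cache reload
    # Succeeds again
    out/minikube-linux-arm64 -p $P ssh sudo crictl inspecti k8s.gcr.io/pause:latest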

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 kubectl -- --context functional-20210811004603-1387367 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210811004603-1387367 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.54s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0811 00:47:41.079795 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:41.086521 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:41.096761 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:41.117044 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:41.157324 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:41.237625 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:41.398009 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:41.718398 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:42.359154 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:43.639352 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:46.199947 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:47:51.320901 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.537914152s)
functional_test.go:719: restart took 32.538024215s for "functional-20210811004603-1387367" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.54s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210811004603-1387367 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 logs
E0811 00:48:01.562037 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
functional_test.go:1165: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 logs: (1.653483107s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 logs --file /tmp/functional-20210811004603-1387367227120194/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 logs --file /tmp/functional-20210811004603-1387367227120194/logs.txt: (1.653454322s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210811004603-1387367 config get cpus: exit status 14 (96.430484ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 config unset cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210811004603-1387367 config get cpus: exit status 14 (101.691277ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (5.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-20210811004603-1387367 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-20210811004603-1387367 --alsologtostderr -v=1] ...

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:504: unable to kill pid 1420798: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.38s)

                                                
                                    
TestFunctional/parallel/DryRun (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (245.014258ms)

                                                
                                                
-- stdout --
	* [functional-20210811004603-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 00:48:31.882919 1420494 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:48:31.883451 1420494 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:48:31.883492 1420494 out.go:311] Setting ErrFile to fd 2...
	I0811 00:48:31.883510 1420494 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:48:31.883759 1420494 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 00:48:31.884191 1420494 out.go:305] Setting JSON to false
	I0811 00:48:31.885671 1420494 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37859,"bootTime":1628605053,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:48:31.885863 1420494 start.go:121] virtualization:  
	I0811 00:48:31.890224 1420494 out.go:177] * [functional-20210811004603-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0811 00:48:31.892773 1420494 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 00:48:31.895110 1420494 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:48:31.897656 1420494 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 00:48:31.901509 1420494 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 00:48:31.902573 1420494 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 00:48:31.951157 1420494 docker.go:132] docker version: linux-20.10.8
	I0811 00:48:31.951259 1420494 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:48:32.045545 1420494 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 00:48:31.986324034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:48:32.045655 1420494 docker.go:244] overlay module found
	I0811 00:48:32.048913 1420494 out.go:177] * Using the docker driver based on existing profile
	I0811 00:48:32.048934 1420494 start.go:278] selected driver: docker
	I0811 00:48:32.049000 1420494 start.go:751] validating driver "docker" against &{Name:functional-20210811004603-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210811004603-1387367 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage
-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:48:32.049184 1420494 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 00:48:32.049222 1420494 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 00:48:32.049239 1420494 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0811 00:48:32.052505 1420494 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 00:48:32.055647 1420494 out.go:177] 
	W0811 00:48:32.055762 1420494 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0811 00:48:32.057957 1420494 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.66s)
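
As the run above shows, --dry-run validates a configuration without touching the cluster, and an impossible request (250MB here) exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch using the same flags:

    out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=docker
    echo $?   # 23: requested memory is below the usable minimum of 1800MB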

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210811004603-1387367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (233.593018ms)

                                                
                                                
-- stdout --
	* [functional-20210811004603-1387367] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 00:48:31.654801 1420451 out.go:298] Setting OutFile to fd 1 ...
	I0811 00:48:31.654921 1420451 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:48:31.654931 1420451 out.go:311] Setting ErrFile to fd 2...
	I0811 00:48:31.654936 1420451 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 00:48:31.655130 1420451 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 00:48:31.655372 1420451 out.go:305] Setting JSON to false
	I0811 00:48:31.656353 1420451 start.go:111] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37859,"bootTime":1628605053,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0811 00:48:31.656439 1420451 start.go:121] virtualization:  
	I0811 00:48:31.659517 1420451 out.go:177] * [functional-20210811004603-1387367] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	I0811 00:48:31.662899 1420451 out.go:177]   - MINIKUBE_LOCATION=12230
	I0811 00:48:31.664954 1420451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	I0811 00:48:31.667112 1420451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	I0811 00:48:31.669056 1420451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 00:48:31.670023 1420451 driver.go:335] Setting default libvirt URI to qemu:///system
	I0811 00:48:31.712164 1420451 docker.go:132] docker version: linux-20.10.8
	I0811 00:48:31.712267 1420451 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0811 00:48:31.799540 1420451 info.go:263] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:46 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-11 00:48:31.742300039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0811 00:48:31.799651 1420451 docker.go:244] overlay module found
	I0811 00:48:31.803952 1420451 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0811 00:48:31.804003 1420451 start.go:278] selected driver: docker
	I0811 00:48:31.804012 1420451 start.go:751] validating driver "docker" against &{Name:functional-20210811004603-1387367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210811004603-1387367 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage
-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0811 00:48:31.804149 1420451 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0811 00:48:31.804193 1420451 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0811 00:48:31.804212 1420451 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0811 00:48:31.806883 1420451 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0811 00:48:31.810049 1420451 out.go:177] 
	W0811 00:48:31.810189 1420451 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0811 00:48:31.812762 1420451 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:815: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (12.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1355: (dbg) Run:  kubectl --context functional-20210811004603-1387367 create deployment hello-node --image=k8s.gcr.io/echoserver-arm:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210811004603-1387367 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:340: "hello-node-6d98884d59-6q29d" [e97a6677-bf6e-4572-ae46-2e901fbd861c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:340: "hello-node-6d98884d59-6q29d" [e97a6677-bf6e-4572-ae46-2e901fbd861c] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.013152997s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 service list
functional_test.go:1385: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.49.2:31351
functional_test.go:1405: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:31351
functional_test.go:1431: Attempting to fetch http://192.168.49.2:31351 ...
functional_test.go:1450: http://192.168.49.2:31351: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-6d98884d59-6q29d

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31351
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmd (12.73s)
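
Condensed, the NodePort round trip above is: deploy, expose, resolve the URL, fetch it. The wget fetch at the end is an illustrative stand-in for the test's HTTP GET; the other commands match the ones in the log.

    kubectl --context functional-20210811004603-1387367 create deployment hello-node --image=k8s.gcr.io/echoserver-arm:1.8
    kubectl --context functional-20210811004603-1387367 expose deployment hello-node --type=NodePort --port=8080

    # Resolve the NodePort endpoint (http://<node-ip>:<node-port>) and fetch it
    URL=$(out/minikube-linux-arm64 -p functional-20210811004603-1387367 service hello-node --url)
    wget -qO- "$URL"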

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:340: "storage-provisioner" [f38e8e4e-de83-493d-a2e2-aee3c18021a0] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014941072s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210811004603-1387367 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210811004603-1387367 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210811004603-1387367 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210811004603-1387367 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [e74b13bc-5b92-4e0e-acb9-bb025048a6ae] Pending
helpers_test.go:340: "sp-pod" [e74b13bc-5b92-4e0e-acb9-bb025048a6ae] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:340: "sp-pod" [e74b13bc-5b92-4e0e-acb9-bb025048a6ae] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0169172s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210811004603-1387367 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210811004603-1387367 delete -f testdata/storage-provisioner/pod.yaml
E0811 00:48:22.042260 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210811004603-1387367 delete -f testdata/storage-provisioner/pod.yaml: (1.668182974s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210811004603-1387367 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:340: "sp-pod" [9efa934a-6caf-4cac-8127-d5c89298a970] Pending
helpers_test.go:340: "sp-pod" [9efa934a-6caf-4cac-8127-d5c89298a970] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:340: "sp-pod" [9efa934a-6caf-4cac-8127-d5c89298a970] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011717034s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210811004603-1387367 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.08s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.87s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.73s)

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/1387367/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /etc/test/nested/copy/1387367/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/1387367.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /etc/ssl/certs/1387367.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/1387367.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /usr/share/ca-certificates/1387367.pem"
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/13873672.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /etc/ssl/certs/13873672.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/13873672.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /usr/share/ca-certificates/13873672.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.91s)
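
The /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 checks pair with the .pem checks because certificates in that directory also get OpenSSL subject-hash file names. A quick way to confirm the mapping, assuming openssl is available inside the node (the profile name below is a placeholder):

    # Prints the subject hash of the synced test certificate; the value should
    # presumably match the 51391683.0 name checked above.
    out/minikube-linux-arm64 -p <profile> ssh "openssl x509 -noout -hash -in /etc/ssl/certs/1387367.pem"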

                                                
                                    
TestFunctional/parallel/DockerEnv (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-20210811004603-1387367 docker-env) && out/minikube-linux-arm64 status -p functional-20210811004603-1387367"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:503: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-20210811004603-1387367 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv (1.29s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210811004603-1387367 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/LoadImage (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210811004603-1387367
functional_test.go:252: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 image load docker.io/library/busybox:load-functional-20210811004603-1387367

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:373: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210811004603-1387367 -- docker image inspect docker.io/library/busybox:load-functional-20210811004603-1387367
--- PASS: TestFunctional/parallel/LoadImage (1.50s)

                                                
                                    
TestFunctional/parallel/RemoveImage (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210811004603-1387367
functional_test.go:344: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 image load docker.io/library/busybox:remove-functional-20210811004603-1387367

                                                
                                                
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 image rm docker.io/library/busybox:remove-functional-20210811004603-1387367
functional_test.go:387: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210811004603-1387367 -- docker images
--- PASS: TestFunctional/parallel/RemoveImage (1.97s)

                                                
                                    
TestFunctional/parallel/LoadImageFromFile (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210811004603-1387367
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210811004603-1387367
functional_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/busybox.tar
functional_test.go:387: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210811004603-1387367 -- docker images
--- PASS: TestFunctional/parallel/LoadImageFromFile (1.30s)
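
The load-from-file flow exercised above can be reproduced by hand with the same commands the test runs; a minimal sketch, assuming a running profile (the profile name and tag are placeholders):

    docker pull busybox:1.31
    docker tag busybox:1.31 docker.io/library/busybox:load-from-file-demo
    docker save -o busybox.tar docker.io/library/busybox:load-from-file-demo
    # Load the tarball into the node's Docker runtime and confirm it arrived.
    out/minikube-linux-arm64 -p <profile> image load ./busybox.tar
    out/minikube-linux-arm64 ssh -p <profile> -- docker images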

                                                
                                    
TestFunctional/parallel/BuildImage (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 image build -t localhost/my-image:functional-20210811004603-1387367 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 image build -t localhost/my-image:functional-20210811004603-1387367 testdata/build: (2.299701205s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-arm64 -p functional-20210811004603-1387367 image build -t localhost/my-image:functional-20210811004603-1387367 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
38cc3b49dbab: Pulling fs layer
38cc3b49dbab: Verifying Checksum
38cc3b49dbab: Download complete
38cc3b49dbab: Pull complete
Digest: sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
Status: Downloaded newer image for busybox:latest
---> 90441bfaac70
Step 2/3 : RUN true
---> Running in d4e5f5b11c96
Removing intermediate container d4e5f5b11c96
---> 77f80fca551a
Step 3/3 : ADD content.txt /
---> a95a68e376be
Successfully built a95a68e376be
Successfully tagged localhost/my-image:functional-20210811004603-1387367
functional_test.go:373: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210811004603-1387367 -- docker image inspect localhost/my-image:functional-20210811004603-1387367
--- PASS: TestFunctional/parallel/BuildImage (2.68s)
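
Judging from the "Step 1/3" through "Step 3/3" lines in the build output above, the context under testdata/build is presumably equivalent to the sketch below; the directory and file contents here are illustrative, not the repository's exact files (the profile name is a placeholder):

    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    echo hello > content.txt
    # Reconstructed Dockerfile, inferred from the build steps: FROM busybox, RUN true, ADD content.txt /
    printf 'FROM busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-arm64 -p <profile> image build -t localhost/my-image:demo .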

                                                
                                    
TestFunctional/parallel/ListImages (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 image ls
2021/08/11 00:48:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-arm64 -p functional-20210811004603-1387367 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/echoserver-arm:1.8
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-20210811004603-1387367
docker.io/library/busybox:1.28.4-glibc
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
--- PASS: TestFunctional/parallel/ListImages (0.28s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo systemctl is-active crio"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo systemctl is-active crio": exit status 1 (275.074392ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 version -o=json --components

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Done: out/minikube-linux-arm64 -p functional-20210811004603-1387367 version -o=json --components: (1.451388379s)
--- PASS: TestFunctional/parallel/Version/components (1.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-arm64 -p functional-20210811004603-1387367 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210811004603-1387367 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.105.156.216 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-arm64 -p functional-20210811004603-1387367 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1245: Took "323.058621ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1259: Took "65.046919ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1295: Took "310.469495ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1308: Took "69.724987ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-20210811004603-1387367 /tmp/mounttest146995385:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628642908763486021" to /tmp/mounttest146995385/created-by-test
functional_test_mount_test.go:110: wrote "test-1628642908763486021" to /tmp/mounttest146995385/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628642908763486021" to /tmp/mounttest146995385/test-1628642908763486021
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.877309ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 11 00:48 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 11 00:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 11 00:48 test-1628642908763486021
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh cat /mount-9p/test-1628642908763486021

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210811004603-1387367 replace --force -f testdata/busybox-mount-test.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:340: "busybox-mount" [7916c466-f7c7-4f4c-ab00-8ff51aa16415] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:340: "busybox-mount" [7916c466-f7c7-4f4c-ab00-8ff51aa16415] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:340: "busybox-mount" [7916c466-f7c7-4f4c-ab00-8ff51aa16415] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.017857191s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210811004603-1387367 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210811004603-1387367 /tmp/mounttest146995385:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.09s)
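
The 9p mount exercised above can be checked by hand with the same commands the test uses; a minimal sketch, assuming a running profile (the host directory and profile name are placeholders):

    mkdir -p /tmp/mountdemo
    # Start the mount in the background, then verify it from inside the node.
    out/minikube-linux-arm64 mount -p <profile> /tmp/mountdemo:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p <profile> ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p <profile> ssh -- ls -la /mount-9p
    # Clean up: force-unmount inside the node and stop the background mount process.
    out/minikube-linux-arm64 -p <profile> ssh "sudo umount -f /mount-9p"
    kill %1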

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-20210811004603-1387367 /tmp/mounttest184236740:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.027248ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210811004603-1387367 /tmp/mounttest184236740:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh "sudo umount -f /mount-9p": exit status 1 (266.579673ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-arm64 -p functional-20210811004603-1387367 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210811004603-1387367 /tmp/mounttest184236740:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210811004603-1387367 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/delete_busybox_image (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210811004603-1387367
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210811004603-1387367
--- PASS: TestFunctional/delete_busybox_image (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210811004603-1387367
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210811004603-1387367
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.3s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-20210811005045-1387367 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-20210811005045-1387367 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.125343ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210811005045-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"ce2f3372-b048-472f-8627-a99020d62d06","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"1e4626a4-d5fb-44dc-a475-6f856f8adef3","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig"},"datacontenttype":"application/json","id":"d9776286-4ae7-4192-9caf-48b4eccbae20","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube"},"datacontenttype":"application/json","id":"370e95c6-0cc4-440e-bbe7-a46efedbec5f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"e8dc49dc-ff8f-41f4-9c76-8e69930170ed","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"78ecfa03-f097-4b47-8253-a5d45fa5f27e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210811005045-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-20210811005045-1387367
--- PASS: TestErrorJSONOutput (0.30s)
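
Each line in the --output=json stream above is a CloudEvents-style record, so the error event can be filtered out with jq; a small sketch, assuming jq is installed (the profile name is a placeholder):

    # Print only the error event's exit code and message from the JSON stream.
    out/minikube-linux-arm64 start -p <profile> --output=json --driver=fail \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.exitcode + ": " + .data.message'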

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.55s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210811005046-1387367 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210811005046-1387367 --network=: (42.162783964s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210811005046-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210811005046-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210811005046-1387367: (2.349655375s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.55s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (48.44s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210811005130-1387367 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210811005130-1387367 --network=bridge: (46.217054122s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210811005130-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210811005130-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210811005130-1387367: (2.183410192s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (48.44s)

                                                
                                    
TestKicExistingNetwork (48.03s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-20210811005219-1387367 --network=existing-network
E0811 00:52:41.079800 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-20210811005219-1387367 --network=existing-network: (45.475964531s)
helpers_test.go:176: Cleaning up "existing-network-20210811005219-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-20210811005219-1387367
E0811 00:53:04.808809 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:04.814174 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:04.824414 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:04.844742 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:04.885298 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:04.965952 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:05.126193 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:05.446367 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:06.087237 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-20210811005219-1387367: (2.324972792s)
--- PASS: TestKicExistingNetwork (48.03s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210811005307-1387367 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0811 00:53:07.367890 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:08.764327 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 00:53:09.928780 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:15.049297 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:25.290172 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:53:45.770882 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 00:54:26.731054 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210811005307-1387367 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m54.605697879s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.20s)

                                                
                                    
TestMultiNode/serial/AddNode (43.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210811005307-1387367 -v 3 --alsologtostderr
multinode_test.go:106: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-20210811005307-1387367 -v 3 --alsologtostderr: (43.051987148s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.78s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

                                                
                                    
TestMultiNode/serial/CopyFile (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --output json --alsologtostderr
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 cp testdata/cp-test.txt multinode-20210811005307-1387367-m02:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 ssh -n multinode-20210811005307-1387367-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 cp testdata/cp-test.txt multinode-20210811005307-1387367-m03:/home/docker/cp-test.txt
helpers_test.go:546: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 ssh -n multinode-20210811005307-1387367-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.46s)
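For reference, the copy-and-verify sequence above reduces to the following commands against this run's profile (a minimal sketch built from the same invocations; the trailing diff is an illustrative addition, not part of the test):

	# copy the fixture to the m02 worker, read it back over ssh, and compare with the source file
	out/minikube-linux-arm64 -p multinode-20210811005307-1387367 cp testdata/cp-test.txt multinode-20210811005307-1387367-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-20210811005307-1387367 ssh -n multinode-20210811005307-1387367-m02 "sudo cat /home/docker/cp-test.txt" | diff - testdata/cp-test.txt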

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 node stop m03: (1.293888687s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status: exit status 7 (553.784775ms)

                                                
                                                
-- stdout --
	multinode-20210811005307-1387367
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210811005307-1387367-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210811005307-1387367-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr: exit status 7 (576.230491ms)

                                                
                                                
-- stdout --
	multinode-20210811005307-1387367
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210811005307-1387367-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210811005307-1387367-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 01:05:59.243123 1459585 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:05:59.243304 1459585 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:05:59.243315 1459585 out.go:311] Setting ErrFile to fd 2...
	I0811 01:05:59.243319 1459585 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:05:59.243460 1459585 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:05:59.243663 1459585 out.go:305] Setting JSON to false
	I0811 01:05:59.243694 1459585 mustload.go:65] Loading cluster: multinode-20210811005307-1387367
	I0811 01:05:59.244063 1459585 status.go:253] checking status of multinode-20210811005307-1387367 ...
	I0811 01:05:59.244549 1459585 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 01:05:59.284981 1459585 status.go:328] multinode-20210811005307-1387367 host status = "Running" (err=<nil>)
	I0811 01:05:59.285034 1459585 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 01:05:59.285355 1459585 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367
	I0811 01:05:59.320819 1459585 host.go:66] Checking if "multinode-20210811005307-1387367" exists ...
	I0811 01:05:59.321189 1459585 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:05:59.321247 1459585 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367
	I0811 01:05:59.355368 1459585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50285 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367/id_rsa Username:docker}
	I0811 01:05:59.465706 1459585 ssh_runner.go:149] Run: systemctl --version
	I0811 01:05:59.469449 1459585 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 01:05:59.480108 1459585 kubeconfig.go:93] found "multinode-20210811005307-1387367" server: "https://192.168.49.2:8443"
	I0811 01:05:59.480141 1459585 api_server.go:164] Checking apiserver status ...
	I0811 01:05:59.480182 1459585 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 01:05:59.494080 1459585 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1962/cgroup
	I0811 01:05:59.501401 1459585 api_server.go:180] apiserver freezer: "8:freezer:/docker/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/kubepods/burstable/pod74969952953b6d01bc2817560a3e688d/bfe8629569ccbc8f4003a3e6c7bb1f48acf00669b3085f7d75dd4f19999dd04f"
	I0811 01:05:59.501475 1459585 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/549bdb3bf1ad8cdc8e3637fd16790e323f3f10545c1a724499bbf68b3777220a/kubepods/burstable/pod74969952953b6d01bc2817560a3e688d/bfe8629569ccbc8f4003a3e6c7bb1f48acf00669b3085f7d75dd4f19999dd04f/freezer.state
	I0811 01:05:59.508201 1459585 api_server.go:202] freezer state: "THAWED"
	I0811 01:05:59.508241 1459585 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 01:05:59.517667 1459585 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 01:05:59.517696 1459585 status.go:419] multinode-20210811005307-1387367 apiserver status = Running (err=<nil>)
	I0811 01:05:59.517707 1459585 status.go:255] multinode-20210811005307-1387367 status: &{Name:multinode-20210811005307-1387367 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0811 01:05:59.517727 1459585 status.go:253] checking status of multinode-20210811005307-1387367-m02 ...
	I0811 01:05:59.518048 1459585 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 01:05:59.551405 1459585 status.go:328] multinode-20210811005307-1387367-m02 host status = "Running" (err=<nil>)
	I0811 01:05:59.551433 1459585 host.go:66] Checking if "multinode-20210811005307-1387367-m02" exists ...
	I0811 01:05:59.551754 1459585 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210811005307-1387367-m02
	I0811 01:05:59.585927 1459585 host.go:66] Checking if "multinode-20210811005307-1387367-m02" exists ...
	I0811 01:05:59.586254 1459585 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 01:05:59.586292 1459585 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210811005307-1387367-m02
	I0811 01:05:59.623451 1459585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50290 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/machines/multinode-20210811005307-1387367-m02/id_rsa Username:docker}
	I0811 01:05:59.709444 1459585 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0811 01:05:59.718916 1459585 status.go:255] multinode-20210811005307-1387367-m02 status: &{Name:multinode-20210811005307-1387367-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0811 01:05:59.718949 1459585 status.go:253] checking status of multinode-20210811005307-1387367-m03 ...
	I0811 01:05:59.719277 1459585 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m03 --format={{.State.Status}}
	I0811 01:05:59.752360 1459585 status.go:328] multinode-20210811005307-1387367-m03 host status = "Stopped" (err=<nil>)
	I0811 01:05:59.752380 1459585 status.go:341] host is not running, skipping remaining checks
	I0811 01:05:59.752385 1459585 status.go:255] multinode-20210811005307-1387367-m03 status: &{Name:multinode-20210811005307-1387367-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (25.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 node start m03 --alsologtostderr: (24.392400721s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.27s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (110.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210811005307-1387367
multinode_test.go:271: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-20210811005307-1387367
multinode_test.go:271: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-20210811005307-1387367: (13.393049562s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210811005307-1387367 --wait=true -v=8 --alsologtostderr
E0811 01:07:41.079473 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 01:08:04.808927 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210811005307-1387367 --wait=true -v=8 --alsologtostderr: (1m36.67992474s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210811005307-1387367
--- PASS: TestMultiNode/serial/RestartKeepsNodes (110.22s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 node delete m03: (4.894861594s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)
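The go-template in the readiness check above is hard to read with the extra quoting the test harness passes through; an equivalent simplified form (a sketch that drops the outer quotes the test includes literally) is:

	# print the Ready condition status for every node, one per line
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'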

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 stop
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 stop: (12.025553403s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status: exit status 7 (124.617169ms)

                                                
                                                
-- stdout --
	multinode-20210811005307-1387367
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210811005307-1387367-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr: exit status 7 (128.591059ms)

                                                
                                                
-- stdout --
	multinode-20210811005307-1387367
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210811005307-1387367-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 01:08:33.104412 1473592 out.go:298] Setting OutFile to fd 1 ...
	I0811 01:08:33.104563 1473592 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:08:33.104573 1473592 out.go:311] Setting ErrFile to fd 2...
	I0811 01:08:33.104577 1473592 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0811 01:08:33.104709 1473592 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/bin
	I0811 01:08:33.104893 1473592 out.go:305] Setting JSON to false
	I0811 01:08:33.104923 1473592 mustload.go:65] Loading cluster: multinode-20210811005307-1387367
	I0811 01:08:33.105311 1473592 status.go:253] checking status of multinode-20210811005307-1387367 ...
	I0811 01:08:33.105784 1473592 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367 --format={{.State.Status}}
	I0811 01:08:33.137882 1473592 status.go:328] multinode-20210811005307-1387367 host status = "Stopped" (err=<nil>)
	I0811 01:08:33.137906 1473592 status.go:341] host is not running, skipping remaining checks
	I0811 01:08:33.137912 1473592 status.go:255] multinode-20210811005307-1387367 status: &{Name:multinode-20210811005307-1387367 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0811 01:08:33.137936 1473592 status.go:253] checking status of multinode-20210811005307-1387367-m02 ...
	I0811 01:08:33.138260 1473592 cli_runner.go:115] Run: docker container inspect multinode-20210811005307-1387367-m02 --format={{.State.Status}}
	I0811 01:08:33.170324 1473592 status.go:328] multinode-20210811005307-1387367-m02 host status = "Stopped" (err=<nil>)
	I0811 01:08:33.170350 1473592 status.go:341] host is not running, skipping remaining checks
	I0811 01:08:33.170356 1473592 status.go:255] multinode-20210811005307-1387367-m02 status: &{Name:multinode-20210811005307-1387367-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (12.28s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (89.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210811005307-1387367 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0811 01:09:27.853904 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210811005307-1387367 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m28.882731471s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210811005307-1387367 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.67s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210811005307-1387367
multinode_test.go:433: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210811005307-1387367-m02 --driver=docker  --container-runtime=docker
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-20210811005307-1387367-m02 --driver=docker  --container-runtime=docker: exit status 14 (113.005081ms)

                                                
                                                
-- stdout --
	* [multinode-20210811005307-1387367-m02] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210811005307-1387367-m02' is duplicated with machine name 'multinode-20210811005307-1387367-m02' in profile 'multinode-20210811005307-1387367'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210811005307-1387367-m03 --driver=docker  --container-runtime=docker
multinode_test.go:441: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210811005307-1387367-m03 --driver=docker  --container-runtime=docker: (45.255545748s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210811005307-1387367
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-20210811005307-1387367: exit status 80 (527.580647ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210811005307-1387367
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210811005307-1387367-m03 already exists in multinode-20210811005307-1387367-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-20210811005307-1387367-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-20210811005307-1387367-m03: (2.603953002s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.57s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:sid/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (14.38s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (14.384367614s)
--- PASS: TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (14.38s)
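The remaining TestDebPackageInstall subtests below repeat this same check against different base images; the shape of the command is (a minimal sketch of the invocation above, with the image tag as the only variable):

	# install libvirt0, then the freshly built kvm2 driver package, inside a throwaway container
	docker run --rm -v /home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp <image> \
	  sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"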

                                                
                                    
TestDebPackageInstall/install_arm64_debian:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (12.3s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (12.298535681s)
--- PASS: TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (12.30s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (12.6s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (12.596485915s)
--- PASS: TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (12.60s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:9/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (9.78s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (9.775632402s)
--- PASS: TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (9.78s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (15s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (15.000560249s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (15.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (13.98s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (13.977689359s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (13.98s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (15.3s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (15.298156314s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (15.30s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (12.49s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
E0811 01:12:41.079315 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_docker_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (12.49320669s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (12.49s)

                                                
                                    
TestInsufficientStorage (16.07s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-20210811011507-1387367 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-20210811011507-1387367 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (13.519193073s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210811011507-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"a3597d6b-021b-4002-813e-48c646437863","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"9f731fe1-b801-48e2-ae40-07c0de743161","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig"},"datacontenttype":"application/json","id":"e19b872c-1a55-4247-9f60-e6f2acc35d51","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube"},"datacontenttype":"application/json","id":"234ca334-a81d-4fb8-acab-e244fec1898b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"babb8538-d68c-4cdd-95fa-88f9bf9262ac","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"a02377fb-ccbb-4d76-9167-b7717fd662ee","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"dc504d99-bb3a-49d0-b064-cb91827cf8d3","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"f0dd9f43-68a0-4dd2-a8cc-953ce4d6516f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"804ab3eb-e6ef-4397-b824-2ca258b4e544","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210811011507-1387367 in cluster insufficient-storage-20210811011507-1387367","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"3019559e-3b90-4afd-a184-5ba64549792e","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"31193dfd-6ba7-4bcc-b430-40b14bd693d0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"7313c7dc-e47b-4ee4-80a8-6baf1e078579","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"5d21c515-7c7d-4360-9bbf-63f15bfeae5f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210811011507-1387367 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210811011507-1387367 --output=json --layout=cluster: exit status 7 (274.646499ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210811011507-1387367","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210811011507-1387367","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 01:15:20.982279 1519339 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210811011507-1387367" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210811011507-1387367 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210811011507-1387367 --output=json --layout=cluster: exit status 7 (283.221606ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210811011507-1387367","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210811011507-1387367","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 01:15:21.266482 1519371 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210811011507-1387367" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	E0811 01:15:21.276245 1519371 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/insufficient-storage-20210811011507-1387367/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210811011507-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-20210811011507-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-20210811011507-1387367: (1.988255178s)
--- PASS: TestInsufficientStorage (16.07s)
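The RSRC_DOCKER_STORAGE advice embedded in the JSON event above is easier to act on when unescaped; roughly (a sketch of the suggested remediation taken from that advice text, using the plain minikube binary name; the Docker Desktop disk-size option it also mentions is a GUI setting and is omitted here):

	# remove unused Docker data on the host (add -a to also drop unused images)
	docker system prune
	# prune inside the minikube node when using the Docker container runtime
	minikube ssh -- docker system prune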

                                                
                                    
TestRunningBinaryUpgrade (113.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.17.0.299191352.exe start -p running-upgrade-20210811012105-1387367 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.17.0.299191352.exe start -p running-upgrade-20210811012105-1387367 --memory=2200 --vm-driver=docker  --container-runtime=docker: (59.527453937s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-20210811012105-1387367 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0811 01:22:41.080225 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-20210811012105-1387367 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.350734415s)
helpers_test.go:176: Cleaning up "running-upgrade-20210811012105-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-20210811012105-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-20210811012105-1387367: (2.639557982s)
--- PASS: TestRunningBinaryUpgrade (113.85s)

                                                
                                    
TestKubernetesUpgrade (137.36s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.227994125s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210811012403-1387367
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210811012403-1387367: (11.065600436s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-20210811012403-1387367 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-20210811012403-1387367 status --format={{.Host}}: exit status 7 (99.005818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.124118584s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210811012403-1387367 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker: exit status 106 (98.782575ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210811012403-1387367] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210811012403-1387367
	    minikube start -p kubernetes-upgrade-20210811012403-1387367 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210811012403-13873672 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210811012403-1387367 --kubernetes-version=v1.22.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0811 01:26:07.854442 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (16.862638789s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210811012403-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210811012403-1387367
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210811012403-1387367: (2.79651956s)
--- PASS: TestKubernetesUpgrade (137.36s)
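Put together, the upgrade path this test validates is the following sequence (a sketch using the same profile and flags that appear above, with the logging flags dropped; only the v1.14.0 -> v1.22.0-rc.0 direction succeeds, and the downgrade attempt is rejected with K8S_DOWNGRADE_UNSUPPORTED as shown):

	# start on the old release, stop, then restart on the release candidate
	out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210811012403-1387367
	out/minikube-linux-arm64 start -p kubernetes-upgrade-20210811012403-1387367 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --driver=docker --container-runtime=docker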

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (127.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210811011523-1387367 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-20210811011523-1387367 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: (2m7.7615509s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.76s)

                                                
                                    
TestPause/serial/Start (59.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210811011644-1387367 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210811011644-1387367 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (59.882619737s)
--- PASS: TestPause/serial/Start (59.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (354.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210811011523-1387367 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [e6b64db4-fa41-11eb-8d58-0242c9ee8e97] Pending
helpers_test.go:340: "busybox" [e6b64db4-fa41-11eb-8d58-0242c9ee8e97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0811 01:17:41.080389 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:340: "busybox" [e6b64db4-fa41-11eb-8d58-0242c9ee8e97] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 5m54.031652456s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210811011523-1387367 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (354.75s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210811011644-1387367 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:89: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210811011644-1387367 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5.81423853s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.84s)

                                                
                                    
TestPause/serial/Pause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210811011644-1387367 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-20210811011644-1387367 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-20210811011644-1387367 --output=json --layout=cluster: exit status 2 (323.041391ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210811011644-1387367","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210811011644-1387367","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.59s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-20210811011644-1387367 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

TestPause/serial/PauseAgain (3.96s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210811011644-1387367 --alsologtostderr -v=5
pause_test.go:107: (dbg) Done: out/minikube-linux-arm64 pause -p pause-20210811011644-1387367 --alsologtostderr -v=5: (3.955924648s)
--- PASS: TestPause/serial/PauseAgain (3.96s)

TestPause/serial/DeletePaused (2.45s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-20210811011644-1387367 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-arm64 delete -p pause-20210811011644-1387367 --alsologtostderr -v=5: (2.448809575s)
--- PASS: TestPause/serial/DeletePaused (2.45s)

TestPause/serial/VerifyDeletedResources (0.37s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210811011644-1387367
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210811011644-1387367: exit status 1 (32.038586ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210811011644-1387367
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-20210811011523-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210811011523-1387367 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-20210811011523-1387367 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-20210811011523-1387367 --alsologtostderr -v=3: (10.936882535s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210811011523-1387367 -n old-k8s-version-20210811011523-1387367: exit status 7 (94.639258ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-20210811011523-1387367 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.5s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-20210811012620-1387367
version_upgrade_test.go:208: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-20210811012620-1387367: (1.498857224s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.50s)

TestStartStop/group/no-preload/serial/FirstStart (80.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210811012751-1387367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0
E0811 01:28:04.808617 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210811012751-1387367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0: (1m20.043359794s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.04s)

TestStartStop/group/no-preload/serial/DeployApp (9.72s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210811012751-1387367 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [2b251243-2ddc-4413-b302-1f5b7ffe037c] Pending
helpers_test.go:340: "busybox" [2b251243-2ddc-4413-b302-1f5b7ffe037c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:340: "busybox" [2b251243-2ddc-4413-b302-1f5b7ffe037c] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.026682631s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210811012751-1387367 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.72s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-20210811012751-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210811012751-1387367 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (11.49s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-20210811012751-1387367 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-20210811012751-1387367 --alsologtostderr -v=3: (11.488257123s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.49s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367: exit status 7 (122.742364ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-20210811012751-1387367 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (351.35s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210811012751-1387367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0
E0811 01:32:41.080234 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 01:33:04.813100 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210811012751-1387367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0: (5m50.728347277s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (351.35s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-g65sd" [ce130039-0230-4b4f-ad1c-ff640055b9c3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-g65sd" [ce130039-0230-4b4f-ad1c-ff640055b9c3] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.02361027s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-g65sd" [ce130039-0230-4b4f-ad1c-ff640055b9c3] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008548503s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210811012751-1387367 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-20210811012751-1387367 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/no-preload/serial/Pause (3.54s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-20210811012751-1387367 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-20210811012751-1387367 --alsologtostderr -v=1: (1.098888731s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367: exit status 2 (315.849618ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367: exit status 2 (314.744633ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-20210811012751-1387367 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210811012751-1387367 -n no-preload-20210811012751-1387367
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.54s)

TestStartStop/group/embed-certs/serial/FirstStart (73.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210811013550-1387367 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210811013550-1387367 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3: (1m13.525580143s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.53s)

TestStartStop/group/embed-certs/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210811013550-1387367 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [1c527fb5-3cf2-4371-a298-84ab9adef47a] Pending
helpers_test.go:340: "busybox" [1c527fb5-3cf2-4371-a298-84ab9adef47a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:340: "busybox" [1c527fb5-3cf2-4371-a298-84ab9adef47a] Running
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.036632477s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210811013550-1387367 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-20210811013550-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-20210811013550-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.060387934s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210811013550-1387367 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/embed-certs/serial/Stop (11.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-20210811013550-1387367 --alsologtostderr -v=3
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-20210811013550-1387367 --alsologtostderr -v=3: (11.22409298s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.22s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367: exit status 7 (110.273785ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-20210811013550-1387367 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (392.18s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210811013550-1387367 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210811013550-1387367 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3: (6m31.830448388s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (392.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-ztnf2" [b98f9509-5548-473b-a76f-dd2739982217] Running
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01831044s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.35s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-ztnf2" [b98f9509-5548-473b-a76f-dd2739982217] Running
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007042882s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210811013550-1387367 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.35s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-20210811013550-1387367 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (3.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-20210811013550-1387367 --alsologtostderr -v=1
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367: exit status 2 (310.912721ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367: exit status 2 (374.372894ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-20210811013550-1387367 --alsologtostderr -v=1
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210811013550-1387367 -n embed-certs-20210811013550-1387367
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.28s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (69.26s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210811014415-1387367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210811014415-1387367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3: (1m9.259034214s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (69.26s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.89s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210811014415-1387367 create -f testdata/busybox.yaml
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:340: "busybox" [0c02785c-5088-49e9-bf43-bca35e288d32] Pending
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:340: "busybox" [0c02785c-5088-49e9-bf43-bca35e288d32] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:340: "busybox" [0c02785c-5088-49e9-bf43-bca35e288d32] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.037253177s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210811014415-1387367 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.89s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-different-port-20210811014415-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-different-port-20210811014415-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006037439s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210811014415-1387367 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/default-k8s-different-port/serial/Stop (11.36s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-different-port-20210811014415-1387367 --alsologtostderr -v=3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-different-port-20210811014415-1387367 --alsologtostderr -v=3: (11.363161445s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (11.36s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367: exit status 7 (105.434238ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-different-port-20210811014415-1387367 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (362.66s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210811014415-1387367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210811014415-1387367 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.21.3: (6m2.071458815s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (362.66s)

TestStartStop/group/newest-cni/serial/FirstStart (66.49s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210811014611-1387367 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210811014611-1387367 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0: (1m6.491673642s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.49s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-20210811014611-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-20210811014611-1387367 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.161813149s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/newest-cni/serial/Stop (11.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-20210811014611-1387367 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-20210811014611-1387367 --alsologtostderr -v=3: (11.198267435s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367: exit status 7 (94.743907ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-20210811014611-1387367 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (26.35s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210811014611-1387367 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0
E0811 01:47:31.557307 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:31.562537 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:31.572749 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:31.592996 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:31.633264 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:31.713553 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:31.874138 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:32.194647 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:32.835156 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:34.115369 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:36.676508 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:41.080311 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 01:47:41.797244 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:47:52.038110 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210811014611-1387367 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-rc.0: (25.872181348s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.35s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-20210811014611-1387367 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/newest-cni/serial/Pause (3.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-20210811014611-1387367 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367: exit status 2 (317.731018ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367: exit status 2 (327.420872ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-20210811014611-1387367 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210811014611-1387367 -n newest-cni-20210811014611-1387367
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.04s)

TestNetworkPlugins/group/auto/Start (75.58s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p auto-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker
E0811 01:48:04.813098 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/functional-20210811004603-1387367/client.crt: no such file or directory
E0811 01:48:12.518844 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:48:53.479201 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:49:12.386810 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p auto-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (1m15.577711179s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.58s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-20210811011758-1387367 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.67s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210811011758-1387367 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-t9sqz" [3071146f-c1ca-4c46-af59-962fd5ad5877] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-t9sqz" [3071146f-c1ca-4c46-af59-962fd5ad5877] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007313409s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.67s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210811011758-1387367 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (5.24s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context auto-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.241229715s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.24s)

TestNetworkPlugins/group/false/Start (61.13s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p false-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
E0811 01:50:15.399462 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p false-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (1m1.132016113s)
--- PASS: TestNetworkPlugins/group/false/Start (61.13s)

TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-20210811011758-1387367 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

TestNetworkPlugins/group/false/NetCatPod (11.49s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context false-20210811011758-1387367 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-wtdqz" [85fee5b8-c40a-4a0e-b526-378ea63ad857] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-wtdqz" [85fee5b8-c40a-4a0e-b526-378ea63ad857] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.008770407s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.49s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:162: (dbg) Run:  kubectl --context false-20210811011758-1387367 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:181: (dbg) Run:  kubectl --context false-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (5.2s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:231: (dbg) Run:  kubectl --context false-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context false-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.200345543s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.20s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.04s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-xwjrp" [d33a5bd0-e393-45c3-890e-510be2f778d9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-xwjrp" [d33a5bd0-e393-45c3-890e-510be2f778d9] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.033437519s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.04s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:340: "kubernetes-dashboard-6fcdf4f6d-xwjrp" [d33a5bd0-e393-45c3-890e-510be2f778d9] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007350137s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210811014415-1387367 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-different-port-20210811014415-1387367 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.77s)

TestStartStop/group/default-k8s-different-port/serial/Pause (4.89s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-different-port-20210811014415-1387367 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-different-port-20210811014415-1387367 --alsologtostderr -v=1: (1.594814136s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367: exit status 2 (447.349316ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367: exit status 2 (517.431717ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-different-port-20210811014415-1387367 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-different-port-20210811014415-1387367 --alsologtostderr -v=1: (1.301920299s)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210811014415-1387367 -n default-k8s-different-port-20210811014415-1387367
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (4.89s)

TestNetworkPlugins/group/custom-weave/Start (72.31s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p custom-weave-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker
E0811 01:54:04.126599 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
E0811 01:54:12.387237 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:54:20.203243 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:20.208689 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:20.218961 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:20.240375 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:20.280631 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:20.360958 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:20.521668 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:20.841839 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:21.482622 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:22.763279 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:25.324003 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:30.445080 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:54:40.685596 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p custom-weave-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: (1m12.305313797s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (72.31s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-weave-20210811011758-1387367 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-weave/NetCatPod (9.73s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210811011758-1387367 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-sz85g" [22b5bc66-0b1d-4ad4-8d6c-03ff20549e2e] Pending
helpers_test.go:340: "netcat-66fbc655d5-sz85g" [22b5bc66-0b1d-4ad4-8d6c-03ff20549e2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-sz85g" [22b5bc66-0b1d-4ad4-8d6c-03ff20549e2e] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 9.014327886s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (9.73s)

TestNetworkPlugins/group/enable-default-cni/Start (63.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0811 01:55:24.965090 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:24.970733 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:24.981361 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:25.002083 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:25.042736 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:25.123497 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:25.284966 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:25.605489 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:26.246142 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:27.526700 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:30.086852 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:35.207563 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:35.430152 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/no-preload-20210811012751-1387367/client.crt: no such file or directory
E0811 01:55:40.286691 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:40.292327 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:40.302933 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:40.323149 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:40.363365 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:40.443577 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:40.603905 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:40.924382 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:41.565318 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:42.128122 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:42.846054 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:45.406902 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:55:45.448071 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:55:50.527401 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:56:00.768394 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:56:05.928488 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m3.335693107s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.34s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-20210811011758-1387367 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210811011758-1387367 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:340: "netcat-66fbc655d5-m86sj" [87306504-ac81-410f-b341-8dbcc6ce2331] Pending
helpers_test.go:340: "netcat-66fbc655d5-m86sj" [87306504-ac81-410f-b341-8dbcc6ce2331] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:340: "netcat-66fbc655d5-m86sj" [87306504-ac81-410f-b341-8dbcc6ce2331] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.010659529s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210811011758-1387367 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210811011758-1387367 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/kindnet/Start (85.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E0811 01:56:21.249191 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:56:46.888613 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/default-k8s-different-port-20210811014415-1387367/client.crt: no such file or directory
E0811 01:57:02.210209 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/false-20210811011758-1387367/client.crt: no such file or directory
E0811 01:57:04.049159 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/auto-20210811011758-1387367/client.crt: no such file or directory
E0811 01:57:31.557306 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/old-k8s-version-20210811011523-1387367/client.crt: no such file or directory
E0811 01:57:41.079952 1387367 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-docker-12230-1385598-4e32f41c836e9c021a12ab8ec720ab6aea4bc3f0/.minikube/profiles/addons-20210811003021-1387367/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-20210811011758-1387367 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (1m25.098404883s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:340: "kindnet-92rmf" [7f77f8e4-b011-47f6-b357-69b881fb8e42] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.034087395s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-20210811011758-1387367 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

Test skip (25/246)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (13.46s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-20210811003008-1387367 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-arm64 start --download-only -p download-docker-20210811003008-1387367 --force --alsologtostderr --driver=docker  --container-runtime=docker: (13.01467916s)
aaa_download_only_test.go:238: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-20210811003008-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-20210811003008-1387367
--- SKIP: TestDownloadOnlyKic (13.46s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:398: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:46: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1541: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:36: skipping TestPreload - not yet supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.39s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210811014414-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-20210811014414-1387367
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)

TestNetworkPlugins/group/flannel (0.31s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210811011758-1387367" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p flannel-20210811011758-1387367
--- SKIP: TestNetworkPlugins/group/flannel (0.31s)